Posted over 5 years ago by [email protected] (Henri Bergius)
The 35th Chaos Communication Congress is now over, and it is time to write about how we built the software side of the c-base assembly there.

c-base at 35C3

The Chaos Communication Congress is a major fixture of the European security and free software scene, with thousands of attendees. As always, the “mother of all hackerspaces” had a big presence there, with a custom booth that we spent nearly two weeks constructing. This year’s theme was “Refreshing Memories”, and accordingly we brought various elements of the history of the c-base space station to the event. On the hardware side we had things like a scale model of the c-base antenna, as well as vintage arcade machines and various artifacts from over the years. On the software side, we utilized the existing IoT infrastructure at c-base to control lights and sound, and to drive videos and other information to a set of information displays. All of it, of course, powered by Flowhub. This was a full-stack development effort, involving microcontroller firmware programming, server-side NoFlo and MsgFlo development, and front-end infoscreen web design. We also did quite a bit of devops work with Travis CI, Docker, and docker-compose.

Local MsgFlo setup

The first step in bringing c-base’s IoT setup to Congress was to prepare a “portable” version of the environment: an MQTT broker, MsgFlo, some components, and a graph with any on-premise c-base hardware or service dependencies removed. As this was for a CCC event, we decided to call it c3-flo (in comparison to the c-flo that we run at c-base). We already have a nice setup where our various systems get built and tested on Travis and uploaded to Docker Hub’s cbase namespace. Some repositories weren’t yet integrated, so the first step was to Dockerize them. To make the local setup simple to manage, we decided to go with a single docker-compose environment that would start all the systems needed.
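A minimal sketch of what such a docker-compose environment could look like. The service and image names here are illustrative assumptions, not the real definitions; see the c3-flo repository for the actual compose file.

```yaml
# Illustrative sketch only; image and service names are assumptions,
# not the actual c3-flo compose file.
version: '3'
services:
  mqtt:
    # An MQTT broker for the MsgFlo participants to connect to
    image: eclipse-mosquitto
    ports:
      - "1883:1883"
  msgflo:
    # MsgFlo orchestrator wiring the participants into a graph
    image: cbase/msgflo
    environment:
      - MSGFLO_BROKER=mqtt://mqtt
    depends_on:
      - mqtt
```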
This would be easy to run on any x86 machine, and would provide us with a quite comprehensive set of features, from the IoT parts to NASA’s Open MCT dashboard. Of course we kept adding to the system throughout 35C3, but in the end the graph looked like the following:

WiFi setup

To make our setup more portable, we decided to bring a local instance of the “c-base botnet” WiFi we use, to Congress. This way all of our IoT devices could work at 35C3 with the exact same firmware and networking setup as they do at c-base. Normally Congress doesn’t recommend running your own access point, but there are guidelines available on how to do it properly if needed. As it happens, out of this year’s 558 unofficial access points, the c-base one was the only one conforming to the guidelines (commentary around the 25 minute mark).

Info displays

Like any station, c-base has a set of info screens showing various announcements, timelines, and statistics. These are built with Raspberry Pi 3s running Chrome in kiosk mode, with a single-page webapp that connects to our MsgFlo infrastructure over WebSockets using msgflo-browser. Each screen has a customized rotation of different pages to show, and we can send URLs via MQTT to announce events like members arriving at c-base or a space launch livestream. For 35C3 we built a new set of pages tailored for the Congress experience:

- A tweaked version of the normal c-base events view, showing current and upcoming talks
- A video player rotating various videos from the history of c-base
- A photo slideshow with a nice set of pictures from c-base
- A countdown screen for upcoming events (the c-base crash, teardown of the assembly at the end of Congress)

Crashing c-base

The highlight of the whole assembly was a re-enactment of the c-base crash from billions of years ago. Triggered by a dropped bottle of space soda, this was an experience incorporating video, lights, and audio that we ran several times every day of the conference.
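The page rotation on the info screens can be sketched as a small piece of logic: a regular rotation of pages, with URLs pushed over MQTT taking priority. This is an illustrative sketch, not the actual c3-flo webapp code; the names (PageRotation, pushUrl) are invented for the example.

```javascript
// Hypothetical sketch of the info screen page rotation described above.
// Class and method names are illustrative, not the actual c3-flo code.
class PageRotation {
  constructor(pages) {
    this.pages = pages; // the regular rotation of page URLs
    this.queue = [];    // announcement URLs pushed over MQTT
    this.index = -1;
  }

  // Called on every rotation tick: pushed announcements take priority,
  // otherwise advance through the regular rotation.
  next() {
    if (this.queue.length > 0) {
      return this.queue.shift();
    }
    this.index = (this.index + 1) % this.pages.length;
    return this.pages[this.index];
  }

  // Called when a URL arrives via MQTT, e.g. a space launch livestream
  pushUrl(url) {
    this.queue.push(url);
  }
}
```

A pushed URL is shown once at the next tick, after which the screen resumes its normal rotation.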
The c-base crash animation was managed by a NoFlo graph integrated into our MsgFlo setup with the standard noflo-runtime-msgflo tool. With this we could trigger the “crash” with an MQTT message (sent by a physical button), and run a timed sequence of actions on lights, a sound system, and our info screens.

Timeline manager

There were some new components that we had to build for this purpose. The most important was a Timeline component that was upstreamed as part of the noflo-tween animation library. With this you can define a multi-tracked timeline as JSON or YAML, with actions triggered on each track at their appropriate second. With MsgFlo this meant we could send timed commands to different devices and create a coordinated experience. For example, our animation started by showing a short video on all info screens. When the bottle fell in the video, we triggered the appropriate soundtrack, and switched the lights through various animation modes. After the video ended, we switched to a “countdown to crash” screen, and turned all lights to a red alert mode. After the crash happened, everything went dark for a few seconds, before the c-base assembly was returned to its normal state.

Controlling McLighting

All the LED strips we used at 35C3 were run using the McLighting firmware. By default it allows switching between different light modes with a simple WebSocket API. For our requirements, we wanted the capability to send new commands to the lights with minimal latency, and to restore the lights, at the end, to whatever mode they had before the crash started. The component is available in noflo-mclighting. The only things you need are to run the NoFlo graph on the same network as the LED strips, and to send the WebSocket addresses of your LED strips to the component. After that you can control them with normal NoFlo packets.

Finally

The whole setup took a couple of days to get right, especially regarding timings and tweaking the light modes. But it was great!
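The multi-tracked timeline idea described above can be made concrete with a small sketch: a data structure with per-track actions keyed by second, and a function that resolves which actions fire at a given moment. The data shape and function name here are illustrative assumptions; see noflo-tween for the actual Timeline component.

```javascript
// Illustrative multi-track timeline, not the actual noflo-tween format.
// Each track lists actions with the second at which they should fire.
const timeline = {
  screens: [
    { time: 0, action: 'show-video' },
    { time: 42, action: 'countdown-to-crash' },
  ],
  lights: [
    { time: 10, action: 'mode:strobe' },
    { time: 42, action: 'mode:red-alert' },
    { time: 60, action: 'off' },
  ],
};

// Return the actions that should fire across all tracks at second `t`
function actionsAt(timeline, t) {
  const fired = [];
  for (const [track, entries] of Object.entries(timeline)) {
    for (const entry of entries) {
      if (entry.time === t) {
        fired.push({ track, action: entry.action });
      }
    }
  }
  return fired;
}
```

A runner ticking once per second and feeding the fired actions to MQTT queues would give the coordinated screens-plus-lights sequence described above.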
You can see a video of it below.

If you’re interested in experimenting with this stuff, check out the “portable c-base IoT setup” at https://github.com/c-base/c3-flo.
Posted about 6 years ago by [email protected] (Henri Bergius)
Fine particulate matter is a serious issue in many cities around the world. In Europe, it is estimated to cause 400,000 premature deaths per year. The European Union has published standards on the matter, and has warned several countries that haven’t been able to reach the safe limits.

Germany saw the highest number of deaths attributable to all air pollution sources, at 80,767. It was followed by the United Kingdom (64,351) and France (63,798). These are also the most populated countries in Europe. (source: DW)

The associated health issues don’t come cheap: 20 billion euros per year on health costs alone.

“To reduce this figure we need member states to comply with the emissions limits which they have agreed to,” Schinas said. “If this is not the case the Commission as guardian of the (founding EU) treaty will have to take appropriate action,” he added. (source: phys.org)

One part of solving this issue is better data. Government-run measurement stations are quite sparse, and — in some countries — their published results can be unreliable. To solve this, Open Knowledge Foundation Germany started the luftdaten.info project to crowdsource air pollution data around the world. Last Saturday we hosted a luftdaten.info workshop at c-base, and used the opportunity to build and deploy some particulate matter sensors. While luftdaten.info has a great build guide and we used their parts list, we decided to go with a custom firmware built with MicroFlo and integrated with the existing IoT network at c-base.

MicroFlo on ESP8266

MicroFlo is a flow-based programming runtime targeting microcontrollers. Just like NoFlo graphs run inside a browser or Node.js, MicroFlo graphs run on an Arduino or other compatible device. The result of a MicroFlo build is a firmware that can be flashed onto a microcontroller, and which can be live-programmed using tools like Flowhub. The ESP8266 is an Arduino-compatible microcontroller with an integrated WiFi chip.
This means any sensors or actuators on the device can easily connect to other systems, as we already do with lots of different sensors at c-base. MicroFlo recently added a feature where WiFi-enabled MicroFlo devices can automatically connect to an MQTT message queue and expose their inports and outports as queues there. This makes MicroFlo on an ESP8266 a fully-qualified MsgFlo participant.

Building the firmware

We wanted to build a firmware that would periodically read both the DHT22 temperature and humidity sensor and the SDS011 fine particulate sensor, even out the readings with a running median, and then send the values out at a specified interval. MicroFlo’s core library already provided most of the building blocks, but we had to write custom components for dealing with the sensor hardware. Thankfully, Arduino libraries existed for both sensors, and this was just a matter of wrapping them in the MicroFlo component interface. After the components were done, we could build the firmware as a Flowhub graph.

To verify the build we enabled Travis CI, where we build the firmware against both the MicroFlo Arduino and Linux targets. The Arduino build verifies that everything compiles with all the required libraries, and the Linux build we can use for test automation with fbp-spec. To flash the actual devices you need the Arduino IDE and Node.js. Then use MicroFlo to generate the .ino file, and flash that to the device with the IDE. WiFi and MQTT settings can be tweaked in the secrets.h and config.h files.

Sensor deployment

The recommended weatherproofing solution for these sensors is quite straightforward: place the hardware in a piece of drainage pipe with the ends turned downwards.
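The "running median" smoothing mentioned above is a simple technique: keep a sliding window of the latest readings and report the median, so that a single spurious reading from the sensor cannot dominate the output. A sketch in plain JavaScript for illustration (the actual firmware component is a MicroFlo C++ component):

```javascript
// Illustrative running-median smoother, not the MicroFlo firmware code.
// Keeps a sliding window of the latest readings and returns the median.
function runningMedian(windowSize) {
  const window = [];
  return function add(value) {
    window.push(value);
    if (window.length > windowSize) {
      window.shift(); // drop the oldest reading
    }
    const sorted = [...window].sort((a, b) => a - b);
    const mid = Math.floor(sorted.length / 2);
    // Even-sized window: average the two middle values
    return sorted.length % 2
      ? sorted[mid]
      : (sorted[mid - 1] + sorted[mid]) / 2;
  };
}
```

Feeding each raw SDS011 reading through such a smoother means one spike (e.g. someone blowing dust at the sensor) barely moves the reported value.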
Since we had two sensors, we decided to install one on the patio, and the other in the c-base main hall.

Working with the sensor data

Once the sensor devices had been flashed, they became available in our MsgFlo setup and could be connected with other systems. In our case, we wanted to do two things with the data:

- Log it in the c-base telemetry system to be visualized with NASA OpenMCT
- Submit the data from the outdoor sensor to the luftdaten.info database

The first was just a matter of adding a couple of configuration lines to our OpenMCT server. For the latter, I built a simple Python component. Our sensors have been tracking for a couple of days now, and the public data can be seen in the madavi service. We’ve submitted our sensor for inclusion in the luftdaten.info database, and hopefully soon there will be another covered area in the Berlin air quality map.

If you’d like to build your own air quality sensor, the instructions on luftdaten.info are pretty comprehensive. Get the parts from your local electronics store or AliExpress, connect them together, flash the firmware, and be part of the public effort to track and improve air quality! Our MicroFlo firmware is a great alternative if you want to do further analysis of the data yourself, or simply want to get the data on MQTT.
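The submission component essentially reshapes MQTT readings into the JSON body the luftdaten.info API accepts. A hedged sketch of such a payload builder (the original component is Python; the field names and value types here are assumptions based on the public API docs, so verify them before use):

```javascript
// Hypothetical sketch of a luftdaten.info submission payload.
// Field names (sensordatavalues, P1/P2 value types) are assumptions
// about the API, not taken from the actual c-base component.
function buildLuftdatenPayload(pm10, pm25) {
  return {
    software_version: 'c-flo-0.1', // illustrative version string
    sensordatavalues: [
      { value_type: 'P1', value: String(pm10) }, // PM10 in µg/m³
      { value_type: 'P2', value: String(pm25) }, // PM2.5 in µg/m³
    ],
  };
}
```

The component would POST this body to the API endpoint with the sensor's identifier in a request header.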
Posted about 6 years ago by [email protected] (Henri Bergius)
Version 1.1 of NoFlo shipped this week with a new, convenient way to write components. With the noflo.asComponent helper you can turn any JavaScript function into a well-behaved NoFlo component with minimal boilerplate. Usage of noflo.asComponent is quite simple:

const noflo = require('noflo');
exports.getComponent = () => noflo.asComponent(Math.random);

In this case we have a function that doesn’t take arguments. We detect this, and produce a component with a single “bang” port for invoking the function. You can also amend the component with helpful information like a textual description and an icon:

const noflo = require('noflo');
exports.getComponent = () => noflo.asComponent(Math.random, {
  description: 'Generate a random number',
  icon: 'random',
});

Multiple inputs

The example above used a function that does not take any arguments. With functions that accept arguments, each argument becomes an input port:

const noflo = require('noflo');

function findItemsWithId(items, id) {
  return items.filter((item) => item.id === id);
}

exports.getComponent = () => noflo.asComponent(findItemsWithId);

The function will be called when both input ports have a packet available.

Output handling

The asComponent helper handles three types of functions:

- Regular synchronous functions: the return value gets sent to out; thrown errors get sent to error
- Functions returning a Promise: resolved promises get sent to out, rejected promises to error
- Functions taking a Node.js-style asynchronous callback: the err argument to the callback gets sent to error, the result gets sent to out

With this, it is quite easy to write wrappers for asynchronous operations.
For example, to call an external REST API with the Fetch API:

const noflo = require('noflo');

function getFlowhubStats() {
  return fetch('https://api.flowhub.io/stats')
    .then((result) => result.json());
}

exports.getComponent = () => noflo.asComponent(getFlowhubStats);

Now that you have this component, it is quick to build a graph utilizing it (open in Flowhub). Here we get the BODY element of the browser runtime. When that has been loaded, we trigger the fetch component above. If the request succeeds, we process it through a string template to write a quick report to the page. If it fails, we grab the error message and write that.

Making the components discoverable

The default location for a NoFlo component is components/ComponentName.js inside your project folder. Add your new components to this folder, and NoFlo will be able to run them. If you’re using Flowhub, you can also write the components in the integrated code editor, and they will be sent to the runtime. We’ve already updated the hosted NoFlo browser runtime to 1.1, so you can get started with this new component API right away.

Advanced components

In many ways, asComponent is the inverse of the asCallback embedding feature we introduced a year ago: asComponent turns a regular JavaScript function into a NoFlo component; asCallback turns a NoFlo component (or graph) into a regular JavaScript function. If you need to work with more complex firing patterns, like combining streams or having control ports, you can of course still write regular Process API components. The regular component API is quite a bit more verbose, but at the same time gives you full access to NoFlo APIs for dealing with manually controlled preconditions, state management, and creating generators. However, thinking about the hundreds of NoFlo components out there, most of them could be written much more simply with asComponent. This will hopefully make the process of developing NoFlo programs a lot more straightforward.
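The port detection described earlier, where a zero-argument function gets a single "bang" port and an n-argument function gets n input ports, can be sketched in plain JavaScript. This is a simplified illustration of the idea, not NoFlo's actual internals; the function name derivePorts is invented for the example.

```javascript
// Simplified sketch of deriving ports from a function signature,
// illustrating the idea behind asComponent. Not NoFlo's actual code.
function derivePorts(fn) {
  if (fn.length === 0) {
    // No arguments: a single "bang" port triggers the function
    return { inPorts: ['in'], outPorts: ['out', 'error'] };
  }
  // Parse parameter names out of the function source text
  const source = fn.toString();
  const params = source
    .slice(source.indexOf('(') + 1, source.indexOf(')'))
    .split(',')
    .map((p) => p.trim().toLowerCase())
    .filter(Boolean);
  return { inPorts: params, outPorts: ['out', 'error'] };
}
```

Real implementations need more care (default values, destructured parameters, minified sources), which is one reason reflection like this stays an internal detail of the helper.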
Read more in the NoFlo component documentation and the asComponent API docs.
Posted about 6 years ago by [email protected] (Henri Bergius)
Version 1.1 of NoFlo shipped this week with a new convenient way to write components. With the noflo.asComponent helper you can turn any JavaScript function into a well-behaved NoFlo component with minimal boilerplate. Usage of noflo.asComponent is ... [More] quite simple: const noflo = require('noflo'); exports.getComponent = () => noflo.asComponent(Math.random); In this case we have a function that doesn’t take arguments. We detect this, and produce a component with a single “bang” port for invoking the function: You can also amend the component with helpful information like a textual description and and icon: const noflo = require('noflo'); exports.getComponent = () => noflo.asComponent(Math.random, { description: 'Generate a random number', icon: 'random', }); Multiple inputs The example above was with a function that does not take any arguments. With functions that accept arguments, each of them becomes an input port. const noflo = require('noflo'); function findItemsWithId(items, id) { return items.filter((item) => item.id === id); } exports.getComponent = () => noflo.asComponent(findItemsWithId); The function will be called when both input ports have a packet available. Output handling The asComponent helper handles three types of functions: Regular synchronous functions: return value gets sent to out. Thrown errors get sent to error Functions returning a Promise: resolved promises get sent to out, rejected promises to error Functions taking a Node.js style asynchronous callback: err argument to callback gets sent to error, result gets sent to out With this, it is quite easy to write wrappers for asynchronous operations. 
For example, to call an external REST API with the Fetch API: const noflo = require('noflo'); function getFlowhubStats() { return fetch('https://api.flowhub.io/stats') .then((result) => result.json()); } exports.getComponent = () => noflo.asComponent(getFlowhubStats); How that you have this component, it is quick to do a graph utilizing it (open in Flowhub): Here we get the BODY element of the browser runtime. When that has been loaded, we trigger the fetch component above. If the request succeeds, we process it through a string template to write a quick report to the page. If it fails, we grab the error message and write that. Making the components discoverable The default location for a NoFlo component is components/ComponentName.js inside your project folder. Add your new components to this folder, and NoFlo will be able to run them. If you’re using Flowhub, you can also write the components in the integrated code editor, and they will be sent to the runtime. We’ve already updated the hosted NoFlo browser runtime to 1.1, so you can get started with this new component API right away. Advanced components In many ways, asComponent is the inverse of the asCallback embedding feature we introduced a year ago: asComponent turns a regular JavaScript function into a NoFlo component; asCallback turns a NoFlo component (or graph) into a regular JavaScript function. If you need to work with more complex firing patterns, like combining streams or having control ports, you can of course still write regular Process API components. The regular component API is quite a bit more verbose, but at the same time gives you full access to NoFlo APIs for dealing with manually controlled preconditions, state management, and creating generators. However, thinking about the hundreds of NoFlo components out there, most of them could be written much more simply with asComponent. This will hopefully make the process of developing NoFlo programs a lot more straightforward. 
Read more NoFlo component documentation and asComponent API docs. [Less]
Posted about 6 years ago by [email protected] (Henri Bergius)
Version 1.1 of NoFlo shipped this week with a new convenient way to write components. With the noflo.asComponent helper you can turn any JavaScript function into a well-behaved NoFlo component with minimal boilerplate. Usage of noflo.asComponent is ... [More] quite simple: const noflo = require('noflo'); exports.getComponent = () => noflo.asComponent(Math.random); In this case we have a function that doesn’t take arguments. We detect this, and produce a component with a single “bang” port for invoking the function: You can also amend the component with helpful information like a textual description and and icon: const noflo = require('noflo'); exports.getComponent = () => noflo.asComponent(Math.random, { description: 'Generate a random number', icon: 'random', }); Multiple inputs The example above was with a function that does not take any arguments. With functions that accept arguments, each of them becomes an input port. const noflo = require('noflo'); function findItemsWithId(items, id) { return items.filter((item) => item.id === id); } exports.getComponent = () => noflo.asComponent(findItemsWithId); The function will be called when both input ports have a packet available. Output handling The asComponent helper handles three types of functions: Regular synchronous functions: return value gets sent to out. Thrown errors get sent to error Functions returning a Promise: resolved promises get sent to out, rejected promises to error Functions taking a Node.js style asynchronous callback: err argument to callback gets sent to error, result gets sent to out With this, it is quite easy to write wrappers for asynchronous operations. 
For example, to call an external REST API with the Fetch API: const noflo = require('noflo'); function getFlowhubStats() { return fetch('https://api.flowhub.io/stats') .then((result) => result.json()); } exports.getComponent = () => noflo.asComponent(getFlowhubStats); How that you have this component, it is quick to do a graph utilizing it (open in Flowhub): Here we get the BODY element of the browser runtime. When that has been loaded, we trigger the fetch component above. If the request succeeds, we process it through a string template to write a quick report to the page. If it fails, we grab the error message and write that. Making the components discoverable The default location for a NoFlo component is components/ComponentName.js inside your project folder. Add your new components to this folder, and NoFlo will be able to run them. If you’re using Flowhub, you can also write the components in the integrated code editor, and they will be sent to the runtime. We’ve already updated the hosted NoFlo browser runtime to 1.1, so you can get started with this new component API right away. Advanced components In many ways, asComponent is the inverse of the asCallback embedding feature we introduced a year ago: asComponent turns a regular JavaScript function into a NoFlo component; asCallback turns a NoFlo component (or graph) into a regular JavaScript function. If you need to work with more complex firing patterns, like combining streams or having control ports, you can of course still write regular Process API components. The regular component API is quite a bit more verbose, but at the same time gives you full access to NoFlo APIs for dealing with manually controlled preconditions, state management, and creating generators. However, thinking about the hundreds of NoFlo components out there, most of them could be written much more simply with asComponent. This will hopefully make the process of developing NoFlo programs a lot more straightforward. 
Read more in the NoFlo component documentation and the asComponent API docs.
Posted about 6 years ago by [email protected] (Henri Bergius)
When building IoT systems, it is often useful to have access to data from the outside world to amend the information your sensors give you. For example, indoor temperature and energy usage measurements will be a lot more useful if there is information on the outside weather to correlate with. Thanks to the open data movement, there are many data sets available. However, many of these are hard to discover or are available in obscure formats.

The BIG IoT marketplace

BIG IoT is an EU-funded research project to make datasets easier to share and discover between organizations. It provides a common semantic standard for how datasets are served, and a centralized marketplace for discovering and subscribing to data offerings.

- For data providers this means they can focus on providing correct information, and let the marketplace handle API tokens, discoverability, and — for commercial datasets — billing
- For data consumers there is a single place and a single API to access multiple datasets. No need to handle different Terms of Usage or different API conventions

As an example, if you're building a car navigation application, you can use BIG IoT to get access to multiple providers of routing services, traffic delay information, or parking spots. If a dataset comes online in a new city, it'll automatically work with your application. No need for contract negotiations, just a query to find matching providers on demand.

Flowhub and BIG IoT

Last summer Flowhub was one of the companies accepted into the first BIG IoT open call. In it, we received some funding to make it possible to publish data from Flowhub and NoFlo on the marketplace.
In this video I'm talking about the project.

In the project we built three things:

- BIG IoT JavaScript library – a Node.js library for publishing datasets in the BIG IoT marketplace
- Flowhub BIG IoT bridge – a set of NoFlo components for creating BIG IoT providers
- Deutsche Bahn and Cologne parking offerings – a set of live examples of integrating existing IoT datasets with the marketplace using Flowhub

Creating a data provider

While it is easy enough to use the BIG IoT Java library to publish datasets, the Flowhub integration we built makes it even easier. You need your data source available on a message queue, a web API, or maybe a timeseries database. And then you need NoFlo and the flowhub-bigiot-bridge library.

The basic building block is the Provider component. This creates a Node.js application server to serve your datasets, and registers them with the BIG IoT marketplace. What you need to do is describe your data offering. For this, you can use the CreateOffering component. You can use IIPs to categorize the data, and then a set of CreateDatatype components to describe the input and output structure your offering uses. Finally, the request and response ports of the Provider need to be hooked up to your data source. The request outport will send packets with whatever input data your subscribers provided, and you need to send the resulting output data to the response port.

For real-world deployment, the Flowhub BIG IoT bridge repository also includes examples of how to test your offerings, and how to build and deploy them with Docker. Here's what a full setup with two different parking datasets looks like:

If you're participating in the Bosch Connected World hackathon in Berlin next week, we'll be there with the BIG IoT team to help projects utilize the BIG IoT datasets.

This project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 688038.
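As a rough sketch of the wiring described above, a provider graph in the .fbp DSL might look something like this. Only the Provider, CreateOffering, and CreateDatatype components and the request/response ports are named in the post; the process names, the FindParkingSpots component, and its port names are hypothetical.

```fbp
# Hypothetical wiring sketch; component namespaces and the lookup
# component are illustrative, not the bridge's actual API.
offering(bigiot/CreateOffering) OUT -> OFFERING provider(bigiot/Provider)
provider REQUEST -> IN lookup(app/FindParkingSpots)
lookup OUT -> RESPONSE provider
```

The key point is the request/response loop: subscriber input flows out of the Provider's request port, through your data source, and back into its response port.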
Posted over 6 years ago by [email protected] (Henri Bergius)
After six years of work, and a bunch of different projects done with NoFlo, we're finally ready for the big 1.0. The two primary pull requests for the 1.0.0 cycle landed today, and so it is time to talk about how to prepare for it.

tl;dr: If your project runs with NoFlo 0.8 without deprecation warnings, you should be ready for NoFlo 1.0.

ES6 first

The primary difference between NoFlo 0.8 and 1.0 is that we're now shipping it as ES6 code utilizing features like classes and arrow functions. Now that all modern browsers support ES6 out of the box, and Node.js 8 is the long-term supported release, it should be generally safe to use ES6 as-is. If you need to support older browsers, older Node.js versions, or maybe PhantomJS, it is of course possible to compile the NoFlo codebase into ES5 using Babel. We recommend that new components be written in ES6 instead of CoffeeScript.

Easier webpack builds

It has been possible to build NoFlo projects for browsers since 2013. Last year we switched to webpack as the module bundler. However, at that stage there was still quite a lot of configuration magic happening inside grunt-noflo-browser. This turned out to be sub-optimal, since it made integrating NoFlo into existing project build setups difficult. Last week we extracted the difficult parts out of the Grunt plugin, and released the noflo-component-loader webpack loader. With this, you can generate a configured NoFlo component loader in any webpack build. See this example. In addition to generating the component loader, your NoFlo browser project may also need two other loaders, depending on how your NoFlo graphs are built: json-loader for JSON graphs, and fbp-loader for graphs defined in the .fbp DSL.

Removed APIs

There were several old NoFlo APIs that we marked as deprecated in NoFlo 0.8. In that series, usage of those APIs logged warnings. Now in 1.0 the deprecated APIs are completely removed, giving us a lighter, smaller codebase to maintain.
Here is a list of the primary API removals and the suggested migration strategy:

- noflo.AsyncComponent class: use WirePattern or Process API instead
- noflo.ArrayPort class: use InPort/OutPort with addressable: true instead
- noflo.Port class: use InPort/OutPort instead
- noflo.helpers.MapComponent function: use WirePattern or Process API instead
- noflo.helpers.WirePattern legacy mode: now WirePattern always uses Process API internally
- noflo.helpers.WirePattern synchronous mode: use async: true and a callback
- noflo.helpers.MultiError function: send errors via the callback or the error port
- noflo.InPort process callback: use Process API
- noflo.InPort handle callback: use Process API
- noflo.InPort receive method: use the Process API getX methods
- noflo.InPort contains method: use the Process API hasX methods
- Subgraph EXPORTS mechanism: disambiguate with INPORT/OUTPORT

The easiest way to verify whether your project is compatible is to run it with NoFlo 0.8. You can also make usage of deprecated APIs throw errors instead of just logging them by setting the NOFLO_FATAL_DEPRECATED environment variable. In browser applications you can set the same flag on window.

Scopes

Scopes are a flow isolation mechanism that was introduced in NoFlo 0.8. With scopes, you can run multiple simultaneous flows through a NoFlo network without the risk of data leaking from one scope to another. The primary use case for scope isolation is building things like web API servers, where you want to safely isolate the processing of each HTTP request from the others, while reusing a single NoFlo graph. Scope isolation is handled automatically for you when using Process API or WirePattern. If you want to manipulate scopes, the noflo-packets library provides components for this. NoFlo in/outports can also be set as scoped: false to support getting out of scopes.

asCallback and async/await

noflo.asCallback provides an easy way to expose NoFlo graphs to normal JavaScript consumers.
The produced function uses the standard Node.js callback mechanism, meaning that you can easily make it return promises with Node.js util.promisify or Bluebird. After this, your NoFlo graph can be run via normal async/await.

Component libraries

There are hundreds of ready-made NoFlo components available on NPM. By now, most of these have been adapted to work with NoFlo 0.8. Once 1.0 ships, we'll try to be as quick as possible in updating all of them to run with it. In the meantime, it is possible to use npm shrinkwrap to force them to depend on NoFlo 1.0. If you're relying on a library that uses deprecated APIs, or hasn't otherwise been updated yet, please file an issue in the GitHub repo of that library. This pull request for noflo-gravatar is a great example of how to implement all the modernization recommendations below in an existing component library.

Recommendations for new projects

This post has mostly covered how to adapt existing NoFlo projects for 1.0. How about new projects? Here are some recommendations:

- While NoFlo projects have traditionally been written in CoffeeScript, for new projects we recommend using ES6. In particular, follow the AirBnB ES6 guidelines
- Use fbp-spec for test automation
- Use NPM scripts instead of Grunt for building and testing
- Make browser builds with webpack utilizing noflo-component-loader
- Use Process API when writing components
- If you expose any library functionality, provide an index file using noflo.asCallback for non-NoFlo consumers

The BIG IoT Node.js bridge is a recent project that follows these guidelines if you want to see an example in action. There is also a project tutorial available on the NoFlo website.