News
Posted 4 days ago
Many of us deal, or will deal, with (connected) M2M/IoT devices. This might mean writing firmware for microcontrollers, using an RTOS like NuttX or a full-blown Unix(-like) operating system like FreeBSD or Yocto/Poky Linux, creating and building code to run on the device, processing data in the backend, or something in between. Many of these devices will have sensors to collect data: GNSS position/time, temperature, light, acceleration, seeing airplanes, detecting lightning, etc.

The backend is work but mostly "solved". One can rely on something like Amazon IoT or create a powerful infrastructure using many of the FOSS options for message routing, data storage, indexing and retrieval in C++. In this post I want to focus on the little detail of how data can get from the device to the backend.

[Image from Wikipedia (CC BY-SA 3.0 by Herzi Pinki)]

To make this thought experiment a bit more real, let's imagine we want to build a bicycle lock/tracker. Many of my colleagues ride their bicycles to work, and bikes being stolen remains a big tragedy. So the primary focus of such an IoT device would be to prevent theft (make other bikes an easier target), to make selling a stolen bicycle more difficult (e.g. by making it easy to check whether something has been stolen) and, in case it has been stolen, to make it easier to find the current location.

Architecture

Let's assume two different architectures. One possibility is to have the bicycle actively acquire its position and then try to push this information to a server ("active push"). Another approach is to have fixed scanning stations, or users, scan and report bicycles ("passive pull"). The two lead to very different designs.

Active Push

The system would need some sort of GNSS module, a microcontroller or a full-blown SoC to run Linux, an accelerometer and maybe more sensors. It should somehow fit into an average bicycle frame, have antennas good enough to work from inside the frame, last/work for the lifetime of a bicycle and, most importantly, have a way to bridge the air gap from the bicycle to the server.

[Push architecture diagram]

Passive Pull

The device would not know its position or whether it is being moved. It might be a simple barcode/QR code/NFC tag/iBeacon/etc. In the case of a barcode it could carry the serial number of the frame and some owner/registration information. In the case of NFC it should be a randomized serial number (if possible, to increase privacy). Users would scan the barcode/QR code, and an application would annotate the found bicycle with the current location (cell towers, wifi networks, WGS 84 coordinate) and upload it to the server. For NFC the smartphone might be able to scan the tag, and one can try to put readers at busy locations. The incentive for the app user is to feel good about collecting points for scanning bicycles, with maybe some reward if a stolen bicycle is found. Buyers could easily check whether a bicycle has been reported as stolen (not considering the difficulty of how to establish ownership).

[Pull architecture diagram]

Technology requirements

The technologies that come to my mind are: barcode, QR code, playing a noise inaudible to humans and decoding it in an app, NFC, ZigBee, 6LoWPAN, Bluetooth, Bluetooth Smart, GSM, UMTS, LTE and NB-IOT. Next I will look at the main differentiators/constraints of these technologies, provide a small explanation for each and finish with how these constraints interact with each other.

World wide usable

Radio technology operates on a specific set of radio frequencies (bands).
Each country may manage these frequencies separately, which can lead to having to use the same technology on different bands depending on the current country. This increases the complexity of the antenna design (or requires multiple antennas), makes the mechanical design more complex and makes software testing, production testing, etc. more difficult. There might also be multiple users/technologies on the same band (e.g. wifi + Bluetooth, or simply too many wifi networks).

Power consumption

Each radio technology needs to broadcast and might need to listen or permanently monitor the air for incoming messages ("paging"). With NFC the scanner might be able to power the device, but for the other technologies this is unlikely to be true. One will need to define the lifetime of the device and the size of the battery, or look into ways of replacing/recycling batteries or charging them.

Range

Different technologies were designed for sender and receiver being different minimum/maximum distances apart (and moving at different speeds, but neither speed nor bandwidth is relevant for our application). E.g. with Near Field Communication (NFC) the workable range is centimeters, while with GSM it will be many kilometers, and with UMTS the cell size depends on how many phones are currently using it (the cell is "breathing").

Pick two of three

Ideally we want something that works over long distances, requires no battery to send/receive and still pushes position/acceleration/event reports to the servers. Sadly this is not how reality works, and we will have to set priorities. The more bands to support, the more complicated antenna design, production, calibration and testing become. One technology might not work in all countries, might not be equally popular everywhere, or the market situation might differ, e.g. some cities have city-wide public hotspots, some don't. Higher transmission power increases the range but increases the power consumption even more: more current is drawn during transmission, which requires a better hardware design to buffer the spikes, a bigger battery and ultimately a way to charge or efficiently replace batteries. Given these constraints it is time to explore some technologies. I will use the ones already mentioned at the beginning of this section.

Technologies

| Technology | Bands | Global coverage | Range | Battery needed | Scan device needed | Device cost | Arch. | Comment |
|---|---|---|---|---|---|---|---|---|
| Barcode/QR code | Optical | Yes | Centimeters | No | App scanning the barcode | Extremely low | Pull | Sticker needs to be hard to remove and visible, maybe embedded into the frame |
| Play audio | Audio inaudible to humans | Yes | Centimeters | Yes | App recording audio | Moderate | Pull | Button to play the audio? |
| NFC | 13.56 MHz | Yes | Centimeters | No | Yes | Extremely low | Pull | Privacy issues |
| RFID | Many | Yes, but not on a single band | Centimeters to meters | Yes | Receiver required | Low | Pull | Many bands, specific readers needed |
| Bluetooth LE | 2.4 GHz | Yes | Meters | Yes | Yes, but common | Low | Pull/Push | Competes with wifi for spectrum |
| ZigBee | Multiple | Yes, but not on a single band | Meters | Yes | Yes | Mid | Push | Not commonly deployed, software more involved |
| 6LoWPAN | Like ZigBee | Like ZigBee | Meters | Yes | Yes | Low | Push | Uses the ZigBee physical layer and then IPv6; requires a 6LoWPAN-to-Internet translation |
| GSM | 800/900, 1800/1900 | Almost global, except South Korea, Japan and some islands | Kilometers | Yes | No | Moderate | Push | Almost global coverage, direct communication with the backend possible |
| UMTS | Many | Less than GSM, but covers South Korea and Japan | Meters to kilometers, depending on usage | Yes | No | High | Push | Higher power usage than GSM, higher device cost |
| LTE | Many | Less than GSM | Designed for kilometers | Yes | No | High | Push | Expensive, higher power consumption |
| NB-IOT (LTE) | Many | Not deployed yet | Kilometers | Yes | No | High | Push | Coming in the future; can be embedded into a GSM band as well as into an LTE carrier |

Conclusion

Both a push and a pull architecture seem feasible, and they create different challenges and possibilities. A pull architecture will require at least smartphone app support and maybe a custom receiver device; it will only work in regions with lots of users, and making privacy violations/tracking more difficult is something to solve. For a push architecture GSM is a good approach. If coverage in South Korea or Japan is required, a mix of GSM/UMTS might be an option. NB-IOT seems nice, but right now it is not deployed, and it is not clear whether a module will require less power than a GSM module; NB-IOT might only be in the interest of basestation vendors (the future will tell). Using GSM/UMTS brings its own set of problems on the device side, but that is for other posts.
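To make the "active push" report a bit more concrete, here is a minimal sketch of what such a device might transmit. All field names, the layout and the event codes are assumptions made for this illustration, not a real protocol:

```cpp
// Hypothetical report an "active push" tracker could send over GSM.
// A real design would additionally need framing, authentication and
// integrity protection.
#include <cstdint>

struct __attribute__((packed)) TrackerReport {
    uint32_t device_id;     // randomized identifier, helps privacy
    uint32_t unix_time;     // GNSS-derived timestamp
    int32_t  lat_microdeg;  // WGS 84 latitude in micro-degrees
    int32_t  lon_microdeg;  // WGS 84 longitude in micro-degrees
    uint8_t  event;         // e.g. 0 = periodic, 1 = motion detected
    uint8_t  battery_pct;   // remaining battery in percent
};

// 18 bytes per report fits comfortably into a single SMS or a small
// GPRS datagram, which matters for power consumption and cost on GSM.
```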
Posted 4 days ago
As part of running infrastructure it might make sense, or be required, to store logs of transactions. A good way might be to capture the raw, unmodified network traffic. For our GSM backend this is what we have to do, and I wrote a client that uses libpcap to capture data and sends it to a central server for storing the trace. The system is rather simple and in production at various customers. The benefit of having a central server is having access to a lot of storage without granting too many systems and users access, central log rotation and compression, an easy way to grab all relevant traces, and many more.

Recently the topic of doing real-time processing of captured data came up. I wanted to add some kind of side channel that distributes data to interested clients before writing it to disk. E.g. one might analyze an RTP audio flow for packet loss and jitter without actually storing the personal conversation. I didn't create a custom protocol but decided to try ØMQ (ZeroMQ). It has many built-in strategies (publish/subscribe, round-robin routing, pipeline, request/reply, proxying, …) for connecting distributed systems. The framework abstracts DNS resolving, connect and re-connect, and makes it very easy to build the standard message exchange patterns. I opted for the publish/subscribe pattern because the collector server (acting as publisher) does not care if anyone is consuming the events or data. The messages I send are quite simple as well. There are two kinds of multi-part messages, one for events and one for data. A subscriber is able to easily filter for events or data, and to filter for a specific capture source. The support for ZeroMQ was added in two commits: the first adds basic ZeroMQ context/socket support and configuration, and the second adds sending out the events and data in a fire-and-forget manner. In a simple test set-up it seems to work just fine.

Since moving to Amsterdam I try to attend more meetups. Recently I went to a talk at the local Elasticsearch group and found out about packetbeat. It is a program written in Go that uses a PCAP library to capture network traffic, has protocol decoders written in Go that do IP re-assembly and decoding, and uploads the extracted information to an instance of Elasticsearch. In principle it sits somewhere between my PCAP system and a distributed wireshark (without the same number of protocol decoders). In our network we wouldn't want the edge systems to talk directly to the Elasticsearch system, and I wouldn't want to run decoders as root (or at least not with extended capabilities). As an exercise to learn a bit more about the Go language I tried to modify packetbeat to consume trace data from my new data interface. The result can be found here, and I do understand (though I am still hooked on Smalltalk/Pharo) why a lot of people like Go. The built-in fetching of dependencies from github is very neat, and the module and interface/implementation approach is easy to comprehend and powerful. The result of my work allows something like the picture below: first we centralize traffic capturing at the pcap collector, then have packetbeat pick up the data, decode it and forward it for analysis into Elasticsearch. Let's see if upstream merges my changes.
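Coming back to the ZeroMQ side channel: the publish side described above boils down to something like the following minimal libzmq sketch. The endpoint, the topic naming and the payload are assumptions for illustration, not the actual osmo-pcap wire format:

```cpp
// Minimal publish side using the libzmq C API (usable from C++).
#include <zmq.h>
#include <cstring>

int main()
{
    void *ctx = zmq_ctx_new();
    void *pub = zmq_socket(ctx, ZMQ_PUB);
    zmq_bind(pub, "tcp://*:5556");              // subscribers connect here

    // Two-part message: a topic frame for filtering, then the payload.
    // Subscribers filter via zmq_setsockopt(sub, ZMQ_SUBSCRIBE, ...).
    const char *topic = "data.capture0";        // per-capture-source topic
    const char payload[] = "raw packet bytes";  // stand-in for pcap data

    zmq_send(pub, topic, std::strlen(topic), ZMQ_SNDMORE);
    zmq_send(pub, payload, sizeof(payload) - 1, 0);

    zmq_close(pub);
    zmq_ctx_term(ctx);
    return 0;
}
```

A PUB socket simply drops messages when no subscriber is connected, so the collector never blocks on slow or absent consumers, which matches the fire-and-forget behaviour described above.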
Posted 5 days ago
This is part of a series of blog posts about testing inside the OpenBSC/Osmocom project. In this post I am focusing on our usage of GNU autotest, a not very well known piece of software that ships with GNU autoconf.

GNU autotest is a very simple framework/test runner. One defines a testsuite, and this testsuite launches test applications and records the exit code, stdout and stderr of each test application. It can diff the output against the expected one and fail if they do not match. Like any of the GNU autotools, a log file is kept about the execution of each test. The tool integrates nicely with automake's make check and make distcheck: the testsuite is executed, and in case of a test failure the build fails.

The way we use it is quite simple as well. We create a simple application inside the tests/testname directory and most of the time just capture the output on stdout. Currently no unit-testing framework is used; instead a simple application is built that mostly uses OSMO_ASSERT to assert the expectations. In case of a failure the application aborts and prints a backtrace. This means that on failure the stdout will not be as expected, the exit code will be wrong as well, and the testcase will be marked as FAILED. The following goes through the details of enabling autotest in a project.

Enabling GNU autotest

The configure.ac file needs to get a line like this:

```
AC_CONFIG_TESTDIR(tests)
```

It needs to be put after the AC_INIT and AM_INIT_AUTOMAKE directives, and make sure AC_OUTPUT lists tests/atlocal.

Integrating with automake

The next thing is to define a testsuite inside tests/Makefile.am. This is some boilerplate code that creates the testsuite and makes sure it is invoked as part of the build process:

```makefile
# The `:;' works around a Bash 3.2 bug when the output is not writeable.
$(srcdir)/package.m4: $(top_srcdir)/configure.ac
	:;{ \
	  echo '# Signature of the current package.' && \
	  echo 'm4_define([AT_PACKAGE_NAME],' && \
	  echo '  [$(PACKAGE_NAME)])' && \
	  echo 'm4_define([AT_PACKAGE_TARNAME],' && \
	  echo '  [$(PACKAGE_TARNAME)])' && \
	  echo 'm4_define([AT_PACKAGE_VERSION],' && \
	  echo '  [$(PACKAGE_VERSION)])' && \
	  echo 'm4_define([AT_PACKAGE_STRING],' && \
	  echo '  [$(PACKAGE_STRING)])' && \
	  echo 'm4_define([AT_PACKAGE_BUGREPORT],' && \
	  echo '  [$(PACKAGE_BUGREPORT)])'; \
	  echo 'm4_define([AT_PACKAGE_URL],' && \
	  echo '  [$(PACKAGE_URL)])'; \
	} >'$(srcdir)/package.m4'

EXTRA_DIST = testsuite.at $(srcdir)/package.m4 $(TESTSUITE)
TESTSUITE = $(srcdir)/testsuite
DISTCLEANFILES = atconfig

check-local: atconfig $(TESTSUITE)
	$(SHELL) '$(TESTSUITE)' $(TESTSUITEFLAGS)

installcheck-local: atconfig $(TESTSUITE)
	$(SHELL) '$(TESTSUITE)' AUTOTEST_PATH='$(bindir)' $(TESTSUITEFLAGS)

clean-local:
	test ! -f '$(TESTSUITE)' || $(SHELL) '$(TESTSUITE)' --clean

AUTOM4TE = $(SHELL) $(top_srcdir)/missing --run autom4te
AUTOTEST = $(AUTOM4TE) --language=autotest
$(TESTSUITE): $(srcdir)/testsuite.at $(srcdir)/package.m4
	$(AUTOTEST) -I '$(srcdir)' -o $@.tmp $@.at
	mv $@.tmp $@
```

Defining a testsuite

The next part is to define which tests will be executed. One needs to create a testsuite.at file with content like the one below:

```
AT_INIT
AT_BANNER([Regression tests.])

AT_SETUP([gsm0408])
AT_KEYWORDS([gsm0408])
cat $abs_srcdir/gsm0408/gsm0408_test.ok > expout
AT_CHECK([$abs_top_builddir/tests/gsm0408/gsm0408_test], [], [expout], [ignore])
AT_CLEANUP
```

This will initialize the testsuite and create a banner.
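The application executed by the AT_CHECK directive is a plain binary. A minimal sketch of such a test, following the OSMO_ASSERT pattern described earlier (the file name and checks are illustrative, not one of our real tests):

```cpp
/* Hypothetical tests/example/example_test.c: print the expected output
 * on stdout and use OSMO_ASSERT for the actual checks. */
#include <stdio.h>
#include <osmocom/core/utils.h>  /* provides OSMO_ASSERT */

static void test_encoding(void)
{
    printf("Testing encoding\n");  /* must match example_test.ok */
    OSMO_ASSERT(1 + 1 == 2);       /* aborts with a backtrace on failure */
}

int main(int argc, char **argv)
{
    test_encoding();
    printf("Done\n");
    return 0;
}
```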
The lines between AT_SETUP and AT_CLEANUP represent one testcase. In there we copy the expected output from the source directory into a file called expout, and then inside the AT_CHECK directive we specify what to execute and what to do with the output.

Executing a testsuite and dealing with failure

The testsuite will be automatically executed as part of make check and make distcheck. It can also be executed manually by entering the tests directory and running the following:

```
$ make testsuite
make: `testsuite' is up to date.
$ ./testsuite
## ---------------------------------- ##
## openbsc 0.13.0.60-1249 test suite. ##
## ---------------------------------- ##

Regression tests.

  1: gsm0408        ok
  2: db             ok
  3: channel        ok
  4: mgcp           ok
  5: gprs           ok
  6: bsc-nat        ok
  7: bsc-nat-trie   ok
  8: si             ok
  9: abis           ok

## ------------- ##
## Test results. ##
## ------------- ##

All 9 tests were successful.
```

In case of a failure the following information will be printed and can be inspected to understand why things went wrong:

```
...
  2: db             FAILED (testsuite.at:13)
...
## ------------- ##
## Test results. ##
## ------------- ##

ERROR: All 9 tests were run, 1 failed unexpectedly.

## -------------------------- ##
## testsuite.log was created. ##
## -------------------------- ##

Please send `tests/testsuite.log' and all information you think might help:

   To:
   Subject: [openbsc 0.13.0.60-1249] testsuite: 2 failed

You may investigate any problem if you feel able to do so, in which case
the test suite provides a good starting point.  Its output may be found
below `tests/testsuite.dir'.
```

You can go to tests/testsuite.dir and have a look at the failing tests. For each failing test there will be one directory that contains a log file about the run and the output of the application. We are using GNU autotest in libosmocore, libosmo-abis, libosmo-sccp, OpenBSC, osmo-bts and cellmgr_ng.
Posted 5 days ago
Last year Jacob and I worked on the osmo-sgsn of OpenBSC. We improved the stability and reliability of the system and moved it to the next level. By adding the GSUP interface we are able to connect it to our commercial-grade Smalltalk MAP stack and use it in a real-world production GSM network. While working on and manually testing this stack we did not use our osmo-pcu software but another, proprietary IP-based BTS; after all, we didn't want to debug PCU issues at the same time.

This year Jacob has taken over as maintainer of the osmo-pcu. He started with a fix for a frequent crash (which had been introduced because we had come to understand the specification on TBF re-use better, but not the code), he has spent hours and hours reading the specification, studied the log output, fixed defect after defect and then moved on to features. We tried the software at this year's Camp and fixed another round of reliability issues.

Some weeks ago I noticed that the proprietary IP-based BTS had been moved from the desk onto the shelf. In contrast to the proprietary BTS, issues now have a real chance of being resolved. It might take a long time, it might take paying another entity to do it, but in the end your system will run better. Free Software allows you to genuinely own and use the hardware you have bought!
Posted 5 days ago
Some Free Software projects have already moved to Github, some probably plan to, and the Python project will move soon. I have not followed the reasons for why the Python project is moving, but there is a long list of reasons to move to a platform like github.com. It seems to have good uptime; it offers checkouts through ssh, git, http (good for corporate firewalls) and a subversion interface; it has integrated wiki and ticket management; the fork feature allows an upstream to discover what is being done to the software; and the pull requests and the integration with third-party providers are great. The last item enables many nice things, especially integration with a ton of Continuous Integration tools (Travis, Semaphore, Circle, who knows).

Not everything is great though. As a Free Software project one might decide that using proprietary javascript to develop and interact with a Free Software project is not acceptable, or one might want to control the repository oneself, and then people look for alternatives. At the Osmocom project we are using cgit, mailing lists, patchwork and trac, host our own jenkins, and then mirror some of our repositories to github.com for easy access. Another option is to find a platform like Github that is Free, and a lot of people look or point to gitlab.com.

From a freedom point of view I think Gitlab is a lot worse than Github. They try to create the illusion that this is a Free Software alternative to Github.com and offer to host your project, but if you want to have the same features when self-hosting you will notice that you fell for their marketing. Their website prominently states "Runs GitLab Enterprise Edition". If you have a look at the feature comparison between the "Community Edition" (the Free Software project) and their open-core additions (the Enterprise Edition), you will notice that many of the extra features are essential. So when deciding between putting your project on github.com or gitlab.com, the question is not proprietary versus Free Software but essentially proprietary versus proprietary, and as such there is no difference.
Posted 5 days ago
The classic question in IT is whether to buy something existing or to build it from scratch. When wanting to buy an off-the-shelf HLR (that actually works), in most cases the customer will end up in a vendor lock-in:

- The vendor might require running on hardware sold by that vendor. This might just be a Dell box with a custom front, really custom hardware in a custom chassis, or even require you to install an entire rack. Either way, you are tied to a single supplier.
- It might come with a yearly license (or support fee) and on top of that might be dongled, so after a reboot the service might not start because the new license key has not been copied.
- The system might not export a configuration interface for what you want. Especially small MVNOs might have specific needs for roaming steering or multi-IMSI, and you can be sure to pay a premium for these features (even if they are off-the-shelf extensions).
- There might be a design flaw in the protocol that you would like to mitigate, but the vendor will try to charge a premium, simply because the vendor can.

The alternative is to build a component from scratch, and the initial progress will be great, as the technology is more manageable than many years ago. You will test against the live SS7 network, maybe even encode messages by hand, and things will appear to work, but only then will the fun start. How big is your test suite? Do you have tests for ITU Q787? How will you do load balancing and database failover? How do you track failures and performance? For many engineering companies this is a bit over their heads (one needs to know GSM MAP, ITU SCCP, SIGTRAN, ASN1 and TCAP).

But there is a third way, and it is available today. Look for a Free Software HLR and give it a try. Check which features are missing and which you want, and develop them yourself or ask a company like sysmocom to implement them for you. Once you move the system into production, maybe find a support agreement that allows the company to continuously improve the software and respond to you quickly. The benefits for anyone looking for an HLR are obvious:

- You can run the component on any Linux/FreeBSD system: on physical hardware, on virtualized hardware, together with other services or not. You decide.
- The software will always be yours. Once you have a running system, there will be nothing (besides time_t overflowing) that has been designed to fail (no license key expires).
- Independence from a single supplier. You can build a local team to maintain the software, or you can find another supplier to maintain it.
- Built for change. Having access to the source code enables you to modify it, and with a Free Software license you are allowed to run your modified versions as well.

The only danger is the OpenCore trap that surrounds many OpenSource projects: make sure that everything you need is available in source form and that the license allows you to run modified copies.
Posted 5 days ago
Imagine you run a GSM network and you have multiple systems at the edge of your network that communicate with other systems. For debugging reasons you might want to collect traffic and then look at it to explore an issue, or look at it systematically to improve your network, your roaming traffic, etc. The first approach might be to run tcpdump on each of these systems, run it in a round-robin manner, compress the old traffic and then have a script that downloads/uploads it once a day to a central place. The issues are that each node needs enough disk space, that you might not feel happy keeping old files on the edge, and that you just don't know when a good time to copy them is.

Another approach is to create an aggregation framework: a client uses libpcap to capture the traffic and redirects it to a central server. The central server stores the traffic and might rotate the files based on size or age. Old files can then be compressed and removed. I created the osmo-pcap tool many years ago for this, and I have recently fixed a 64-bit PCAP header issue (the timeval in the on-disk header is 32-bit), fixed the collection of jumbo frames, updated the README.md file of the project, created packages for Debian, Ubuntu, CentOS, OpenSUSE and SLES, and made sure that it can be compiled and used on FreeBSD 10 as well. If you are using this software, or decided not to use it, I would be very happy to hear about it.
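To illustrate the 64-bit header detail: the pcap file format stores four fixed 32-bit values per captured packet, so the in-memory libpcap header (whose struct timeval may contain a 64-bit time_t on modern systems) must not be written to disk as-is. A small sketch of the conversion, with the struct and function names invented for this example:

```cpp
#include <pcap.h>
#include <cstdint>

// On-disk pcap per-record header: all fields are fixed 32-bit values.
struct PcapRecordHeader {
    uint32_t ts_sec;    // timestamp, seconds
    uint32_t ts_usec;   // timestamp, microseconds
    uint32_t incl_len;  // number of octets saved in the file
    uint32_t orig_len;  // original length of the packet on the wire
};

// Convert the in-memory header libpcap hands us into the fixed-width
// on-disk representation before writing it out.
static PcapRecordHeader to_disk(const struct pcap_pkthdr *in)
{
    PcapRecordHeader out;
    out.ts_sec   = static_cast<uint32_t>(in->ts.tv_sec);
    out.ts_usec  = static_cast<uint32_t>(in->ts.tv_usec);
    out.incl_len = in->caplen;
    out.orig_len = in->len;
    return out;
}
```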
Posted 5 days ago
Berlin continues to gain a lot of popularity; culturally and culinarily it is an awesome place, and despite increasing rents it still remains more affordable than other cities. In terms of economy, Berlin attracts new companies and branches/offices as well. At the same time I felt the itch, and it was time to leave my home town once again. In the end I settled on the bicycle-friendly (and sometimes sunny) city of Amsterdam. My main interest remains building reliable systems with Smalltalk, C/C++ and Qt, learning new technology (Tensorflow? Rust? ElasticSearch, Mongo, UUCP), talking about GSM (SCCP, SIGTRAN, TCAP, ROS, MAP, Diameter, GTP) or getting re-exposed to WebKit/Blink. If you are in Amsterdam, or if you know people or companies there, I am happy to meet and make new contacts.
Posted 5 days ago
In the past I have written about my usage of Tufao and Qt to build REST services. This time I am writing about my experience of using the TreeFrog framework to build a full web application. You might wonder why one would want to build such a thing in a statically typed, compiled language instead of something more dynamic. There are a few reasons for it:

- Performance: The application is intended to run on our sysmoBTS GSM basestation (TI Davinci DM644x). By modern standards it is a very low-end SoC (ARMv5te instruction set, single core, a low amount of RAM, etc.) and at the same time still perfectly fine to run a GSM network.
- Interface: For GSM we have various libraries with a C programming interface, and they are easy to consume from C++.
- Compilation/Distribution: By (cross-)building the application there is a "single" executable, and we don't have the dependency mess of Ruby.

The second decision was to not use Tufao and to search for a framework that has user management and a template/rendering/canvas engine built in. At the Chaos Communication Camp in 2007 I remember hearing a conversation about a "Qt for the Web" (Wt, the C++ Web Toolkit), and this was the first framework I looked at. It seems like a fine project/product, but interfacing with Qt seemed like an afterthought. I continued to look and ended up finding and trying the TreeFrog framework. I am really surprised that this project has existed for so long without me having heard about it. It is built on top of Qt, uses QtSQL for the ORM mapping and QMetaObject for dispatching to controllers and for the template engine, and it resembles Ruby on Rails a lot. It has two template engines and routing of URLs to controllers/slots, and one can embed any C++ in the templates. The documentation is complete, and by using the search on the website I found everything I was looking for, even for my "advanced" topics. Because of my own stupidity I ended up single-stepping through the code, and a Qt coder should feel right at home.

My favorite features:

- tspawn model TableName will autogenerate (and update) a C++ model based on the table in the database. The updating works as well.
- The application builds a libmodel.so, libhelper.so (I removed that) and libcontroller.so. When using the -r option the application will respawn itself. At first I thought I would not like it, but it improves round-trip times.
- C++ in the template. The ERB template is parsed and a C++ class is generated, whose ::toString() method produces the HTML code. So in case something goes wrong, it is very easy to inspect.

If you are currently using Ruby on Rails or Django but would like to do it with C++, have a look at TreeFrog. I really like it so far.
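To give a flavour of the "C++ in the template" point, a hypothetical TreeFrog-style ERB view might look like the following. The markup and variables are invented for illustration; check the TreeFrog documentation for the exact template and echo semantics. The code between the <% %> markers is plain C++ (here using QString), and the whole template is compiled into a C++ class whose toString() emits the rendered HTML:

```
<ul>
<% for (int i = 0; i < 3; ++i) { %>
  <li><%= QString("bike %1").arg(i) %></li>
<% } %>
</ul>
```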
Posted 5 days ago
As part of the Osmocom.org software development we have a Jenkins set-up that executes unit and system tests. For OpenBSC we compile the software, then execute the unit tests and finally run a bunch of system tests. The system tests verify making configuration changes through the telnet interface and the machine control interface, might try to connect to other parts, etc. In the past these were executed after a committer had pushed changes to the repository, and the build time didn't matter. As part of the move to Gerrit code review we execute them before, and this means that people might need to wait for the result… (and waiting for a computer shouldn't be necessary these days).

sysmocom is renting a dedicated build machine to speed up compilation, and I have looked at how to execute the system tests in parallel. The issue is that during a system test we bind to ports on localhost, which means we cannot have two test runs at the same time. I decided to use the Linux network namespace support and opted for docker to achieve it. There are some hiccups, but in general it is a great step forward. Using a statement like the following we execute our CI script in a clean environment:

```
$ docker run --rm=true -e HOME=/build -w /build -i -u build \
    -v $PWD:/build osmocom:amd64 /build/contrib/jenkins.sh
```

As part of the OpenBSC build we are re-building dependencies, and thanks to building in the virtual /build directory we can look at archiving libosmocore/libosmo-sccp/libosmo-abis and not rebuilding them all the time.
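Since every container gets its own network namespace, two runs can bind the same localhost ports without clashing. A sketch of what two parallel invocations might look like (the per-run host directories are made up for this example; the actual Jenkins job wiring differs):

```
$ docker run --rm=true -e HOME=/build -w /build -i -u build \
    -v $PWD/run1:/build osmocom:amd64 /build/contrib/jenkins.sh &
$ docker run --rm=true -e HOME=/build -w /build -i -u build \
    -v $PWD/run2:/build osmocom:amd64 /build/contrib/jenkins.sh &
```

Both containers can, for example, bind the telnet interface test port on their own 127.0.0.1 at the same time.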