Start of the 3D printing age     Wed Sep 22 2021
We've been experimenting with IoT edge protocols and sensors over the past few months, and one thing quickly became a problem: the lack of enclosures for our custom devices.

When you are putting together a Raspberry Pi with a light & humidity sensor, a LoraWAN antenna, 2 rechargeable batteries and some 5G connectivity, there aren't a lot of cases out there for your prototype.


Say hello to "Porcu-Pi", a beast of connectivity

So when we ordered the TOFU board from the Swiss company Oratek (@Oratek: the integrated SIM card is a game-changer, thanks!), they also offered a case to go with their board. The options were either to pay CHF 35 to receive the case by post, or CHF 8 for the 3D model digital files only. Sense6 has a 3D printer, so we selected the second option.

So there it is: the revolutionary benefits of 3D printing delivered to us for the first time and without fanfare. Within a few hours we had the parts printed and assembled: no need for shipping, no need for delays, and completely custom to our requirements. We modified the 3D model by making the enclosure higher to accommodate the additional battery pack and 5G antennas.

Perhaps the most interesting realization was that the process felt simple and familiar, as if it had been around for the past 10 years.
"First mile" data extraction with LoraWAN     Fri Aug 6 2021
For real-world AI applications, your ability to access more relevant data is just as important as your ability to deploy sophisticated machine-learning models. That's why at Sense6 we also specialize in some IoT-related technologies (MQTT, Wireguard, deploying cool sensors). So what does this have to do with LoraWAN?

The situation: a lot of use-cases need to react in real-time to sensor data. To do that, you need to quickly get the data out of the sensors and onto a secured network in order to analyze it. If your sensors are indoors and have access to WiFi, or they're on some fancy robot and you don't mind adding a GSM cellular connection (and have the money and electric power to spare), then maybe LoraWAN is not for you. But in other cases things are not so simple: it's hard to get the data out of the sensors, which is what we mean by the "first mile".

The problem: imagine a logistics company wanting to monitor shipping containers, or a farmer wanting to survey crops in open fields, or a company wanting to monitor its office room temperatures but without the resources to add internet everywhere.
In these cases, it's not practical to use traditional communication protocols, usually for 3 reasons:
  • Too costly: wiring up a warehouse or an office with Ethernet requires significant investment
  • Low reach of WiFi: while WiFi would be great, its range is a few dozen metres indoors and maybe a couple hundred metres outdoors. Our use-cases require hundreds of metres indoors and kilometres outdoors
  • Power constraints: most of these nodes need to run outdoors and therefore on battery. We can't be recharging batteries every week, nodes must be power efficient

A solution: LoraWAN. In a nutshell, LoraWAN enables you to connect cheap sender and receiver devices across many kilometres outdoors and hundreds of metres indoors. These Lora devices can run for many months on a single battery charge (we haven't tested this yet). There is a trade-off, however: you are very restricted in the amount of data you can send. Forget sending images or even long texts; it's best to restrict Lora messages to a few numbers. Tough, but not a showstopper: a heat sensor detecting a fire doesn't need to send a newspaper article, 1 bit could suffice (fire = 1, no fire = 0).
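
To give a feel for how little data that is, here is a minimal Python sketch of a compact payload encoding. The field choices and the 4-byte layout are our own illustration, not a LoraWAN standard, and the actual radio transmission depends on your hardware library.

import struct

def encode_payload(temperature_c, humidity_pct, fire_alarm):
    # Pack the reading into 4 bytes: temperature as a signed 16-bit integer
    # (0.01 degree resolution), humidity as one byte, fire flag as one byte.
    return struct.pack(">hBB", round(temperature_c * 100), int(humidity_pct), 1 if fire_alarm else 0)

payload = encode_payload(21.4, 64, False)
print(len(payload), payload.hex())  # 4 bytes, small enough for a Lora message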

It's not glamorous, but we've got a strong signal here... and we think that's amazing.

Our first experiment: we tested prototype #1 during a storm in the countryside: strong rain and howling winds (not very apparent in the photo) couldn't keep us indoors. Right away the "sender" node (half the size of a credit card) established a strong connection with a "receiver gateway" (the size of a pocket book) more than 1 kilometre away, despite the receiver being indoors and a few trees and houses separating us. So cool!

We'll be testing the stability of these devices in the months to come. I think you can tell we're excited about this technology: it opens more doors for powerful AI applications.
Show don't tell #1     Sun Mar 21 2021
To follow this demo, you'll need both a computer screen and your phone. Keep your computer on this page. Take out your phone, start the camera app and point the camera at the QR code to the left. After a few seconds a link should appear; tap it to load the augmented reality demo on your phone (preferably using the Chrome browser).



If for some reason the below instructions don't work on your phone, here's a video showing what should have happened ;)






Point your phone camera at this first marker; it should automatically start a video overlaid on top of the marker. We can imagine using this in marketing (creating engaging advertisements) or in construction (visual explanations for complex machinery).

It's also interesting to think that different videos can be started depending on the context, for example overriding the default with an evacuation video in case of a fire.

(Credits to Blender for the video used)








This second marker demonstrates the ability to display dynamic data, calculated on the spot or read from a database or sensor.

A button should also appear at the bottom of the screen that can be used to reference further relevant content, for example user manuals or detailed reports.








This last marker shows that you can trigger other apps. If you press the button that appears at the bottom of your screen, it should open your maps application and point you to ETH Zurich. Admittedly this is a basic interaction, but more complex ones are possible via RESTful APIs.

Also note that the marker images don't have to be blobs of color or black-and-white boxes; they can contain text.


We're excited about this technology and are busy developing our first commercial applications. More coming soon. Please get in touch if you want a more detailed demonstration.
The urgency of (some) cloud computing     Tue Feb 2 2021
A great many articles have been written on innovation. Rather than contribute to the overall body of literature, we'd like to focus on one important enabler: access (or lack thereof) to cloud computing resources.

Why nitpick on such a specific topic? Because we don't see it mentioned enough, and time and time again it is one of the greatest hurdles we face, especially in the financial services industry.

If we segment companies into 2 distinct groups - the "haves" and "have-nots" of cloud computing - we observe drastically different expectations with respect to the speed of IT innovation.

When we ask a "haves" company (approx. 30% those we know) if they can give us access to a prototyping environment, they'll usually shrug and say "Sure. Is tomorrow OK? How much RAM and storage space do you need?" And that's great, and we can start the next day.

When we ask the same question to a "have-nots" company, the response is a cold, icy stare followed by something like "Sure, why don't we schedule a meeting in 10 years to finalize the details, it will likely cost $20,000, and we first need to get this huge bureaucratic process out of the way". And that's not great, and we're basically stuck.

We should back up here and explain why cloud computing is important. In a nutshell, it provides the necessary infrastructure for innovation: the aforementioned rapid prototyping environment.
IT innovation doesn't happen in end-user applications such as Microsoft Excel or PowerPoint. You need server technologies to build 24/7 responsive services, sustainable data repositories, intuitive web interfaces and embedded workflows. Prototyping servers are no longer provided as physical machines that you store in office corners; rather, they are virtual and served from a company's cloud.

Imagine an application that receives an address list via email, corrects and validates these addresses using a Swiss address API, and finally generates an output file ready for back-end lead-management systems. This is a typical use-case for a short innovation cycle (one we recently completed): impossible to run reliably on a laptop, but taking 5 minutes to deploy on a virtual server (or container).
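
To make the shape of such a prototype concrete, here is a minimal Python sketch of the correction step. The validation endpoint, field names and file layout are hypothetical placeholders, not the actual Swiss address API or the system we deployed.

import csv
import requests

VALIDATION_URL = "https://example.com/validate-address"  # hypothetical endpoint

def validate_address(raw_address):
    # Send one raw address to the (hypothetical) validation service
    # and return the corrected version it proposes.
    response = requests.post(VALIDATION_URL, json={"address": raw_address}, timeout=10)
    response.raise_for_status()
    return response.json().get("corrected", raw_address)

def process(in_path="addresses.csv", out_path="leads.csv"):
    # Read the incoming list, validate each entry, and write a clean
    # output file ready for the back-end lead-management system.
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src):
            writer.writerow([validate_address(row[0])])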

Why not just use a public cloud provider such as AWS if the company isn't ready? Financial services companies are rightfully restrictive about their customer data; careful design is therefore needed to set up a cloud that provides secure virtual environments before it can be used for prototyping.

We're not advocating for a complete shift to cloud computing (not required), nor taking a position on the best mix of public versus private cloud (it depends) or the best technology to get there (we like Kubernetes/Docker, but there are alternatives). What we are advocating for is that any bank or insurer should be able, within days, to provide a secured virtual environment where its teams can prototype and experiment. Cloud computing technologies have drastically matured over the last 10 years (fueled by the rise of AWS, Google Cloud and many others). Configuring a functional and secure private/public cloud is within the reach of any company.

Cloud computing gives you the means to prototype, to experiment, to innovate in a secured environment. Please, if you are a company where securing cloud resources takes more than 2 weeks: someone in IT needs to tackle this head-on. Competitors are moving, customer expectations are rising, and Excel can only get you so far. If you need help getting organized and setting things up, please contact us.
Boosting search result accuracy with ML     Thu Dec 3 2020
We've been super busy, so apologies for the low number of blog posts. Here's another tech one lifted from production; we'll keep it short and sweet.

Say you've got 800'000 recipes stored in a database as pure text (no structured data) and you want to automate a workflow that, based on a customer-submitted description, pulls out exactly the right recipe (or the 10 best matches). Database text search tools aren't designed to be accurate for "fuzzy" queries on large data volumes. Traditional keyword approaches won't help either: how would you know which keywords to focus on? And to make things even more interesting, some of the 800'000 recipes are updated, removed or added every month, so static mappings won't help, and some recipes use the metric system whereas others use the imperial system.

Enter machine learning, more specifically LSTM models. You define 40 recipe categories (soups, desserts, salads, etc.) and sort a few thousand recipes into these categories. Using this data you can train an LSTM model to predict the category of a recipe based on its text description... and the same LSTM can help you classify what kind of category a user is looking for. So when a user enters a query, you first identify the recipe category and then focus your search only on that category. In our example, and assuming recipes are uniformly distributed across categories, this would reduce our query to roughly 20'000 recipes, a much easier problem than before.
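
For the curious, here is a minimal Keras sketch of such a category classifier. The vocabulary size, layer sizes and training data are illustrative assumptions, not our production configuration.

from tensorflow.keras import layers, models

NUM_CATEGORIES = 40   # soups, desserts, salads, ...
VOCAB_SIZE = 20000    # assumed vocabulary size after tokenization

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),                    # token ids -> dense vectors
    layers.LSTM(64),                                      # read the recipe text
    layers.Dense(NUM_CATEGORIES, activation="softmax"),   # one score per category
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# x_train: tokenized recipe texts, y_train: category ids (0..39)
# model.fit(x_train, y_train, epochs=5, validation_split=0.1)
# At query time, classify the user's description the same way and
# restrict the database search to recipes in the predicted category.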

There are further steps to the algorithm (think "word2vec" to identify the best keywords) but we'll stop here: it already illustrates how including machine learning at distinct points in an automation process has a significant impact. The above use-case is something we are dealing with at one of our clients (not with recipes) and the high accuracy we achieve wouldn't be possible without machine learning.
Triangulating data with machine learning     Wed Oct 7 2020
We’ve had a little breakthrough at Sense6 recently, solving a problem that had been occupying our attention for a few months. Ready for a small technical rant? We’ll keep things as simple as possible and start with an example.

Suppose you’re developing your own OCR engine and have trained 3 neural networks with different specializations. The first is trained for dealing with black-and-white documents, the second for very colorful documents, the third is a jack-of-all-trades performing reasonably well whatever the situation.

Let’s further assume that the best overall performance comes from the “jack of all trades”, with an accuracy of 75%. However, in the 25% of cases it gets wrong, you’ve observed that at least one of the specialized models gets the right result. If you were able to always select the best result from your 3 models, your accuracy would skyrocket.

Programming static rules atop 3 neural networks is a recipe for disaster. Therefore you decide to create a second type of neural network whose goal is to triangulate the best result from the output of the 3 underlying models. You set everything up and excitedly run your first test-cases, waiting for accuracy bliss and … nothing happens. Still stuck at 75%. Cue weeks of debugging 2 layers of neural networks, an activity best described as searching for a needle in a haystack... in the dark... with mittens on.

We’ll pause here to let the reader wonder about the solution for a few seconds.



There’s an important step whenever training classification models - handling “class imbalance” - which we were taking into account for the first-layer models but had forgotten in the second layer, mistakenly thinking it had already been dealt with. What is it? Let’s take another example.

Suppose you wanted to train a neural network to “turn left” or “turn right”, but 99% of your training data pertained to a “turn left” scenario. Your training data is “imbalanced”, and if you trained a model without any changes, it would quickly learn that the best strategy is to simply turn left all the time: a 1% error rate isn’t bad. By the way, this is typical when working on “fraud” use-cases, where fraudulent banking transactions or fraudulent insurance claims are a tiny fraction of the total data volume.

To obtain the correct behavior you need to rebalance your data, i.e. transform your training data until “turn left” and “turn right” appear in roughly a 50:50 ratio. There are multiple ways to achieve this (a separate topic altogether); a basic approach is to randomly delete surplus data until you reach the desired equilibrium.
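
As an illustration, here is a minimal Python sketch of that basic approach (random undersampling). Dedicated libraries exist for this; the code below only conveys the idea.

import random
from collections import defaultdict

def undersample(samples, labels, seed=0):
    # Group sample indices by class, then keep only as many per class
    # as the rarest class has, dropping the surplus at random.
    random.seed(seed)
    by_class = defaultdict(list)
    for i, label in enumerate(labels):
        by_class[label].append(i)
    target = min(len(indices) for indices in by_class.values())
    kept = []
    for indices in by_class.values():
        kept.extend(random.sample(indices, target))
    random.shuffle(kept)
    return [samples[i] for i in kept], [labels[i] for i in kept]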

So back to our original OCR example: what was happening? Well, the triangulation model had learned that one of the underlying OCR models had a better overall performance and that the best strategy was simply to stick with it and ignore the others. To remedy this, all you need to do is rebalance the data, thereby removing the incentive to always default to the “jack of all trades” output (on rebalanced data, always picking the same model only yields about 33% accuracy, which is no longer attractive), and hey presto: performance improvement.
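
For concreteness, here is a minimal Keras sketch of what such a triangulation layer could look like, assuming each underlying OCR model exposes a confidence score. The features, shapes and sizes are illustrative, not our actual setup.

from tensorflow.keras import layers, models

# Input: one confidence score per underlying OCR model (3 features).
# Label: which of the 3 models produced the correct result.
triangulator = models.Sequential([
    layers.Input(shape=(3,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(3, activation="softmax"),  # probability of trusting each model
])
triangulator.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Crucial step: rebalance the labels (see the sketch above) before training,
# otherwise the triangulator learns to always pick the jack-of-all-trades.
# triangulator.fit(x_balanced, y_balanced, epochs=10)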

The more experienced of you could point out: “Why bother with 2 layers of neural networks for OCR? Why not just create a bigger, deeper neural network and let it deal automatically with jack-of-all-trades versus specialized internal configurations?”
That is a valid criticism, and that’s why OCR is a simplistic example. However, there are use-cases where (we think) a second neural-network layer makes sense, mostly because it has access to more data (features) than was available in the “first run” prediction. For example:
  • Data extraction from financial statements, where the relationship between financial metrics can be codified as features in a second layer (e.g. total assets = liquid assets + non-liquid assets)
  • Decision-making in self-driving cars, where the master model receives as input the predictions from the various car sensors (layer 1 neural networks). We’re less sure of this one though, having never worked on self-driving neural networks

In closing, every time we describe something technical we seem to go on a rant and write long posts. Apologies for this; we’re trying to break down complex problems into small, digestible parts, and unfortunately that takes up space.
Two charts on Covid-19     Thu Aug 27 2020
We love data at Sense6 and, now that a few months have passed in the "Covid-19 world", we thought we'd share two charts on what we think are interesting observations. Unfortunately we only have data on Switzerland, so this is a single-country analysis.

Chart 1: Overview Covid-19 # of deaths versus # of positive test cases per day



To build this chart we visited bag.admin.ch (almost) every day and copied the key stats (# of deaths, # of positive test cases) into our own Excel file. We opted for a 7-day average to reduce noise and to replicate the approach used by the NYTimes, so that results can be compared.
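
If you prefer code to Excel, a 7-day average is a one-liner in pandas. The file name and column names below are hypothetical, chosen just to illustrate the computation.

import pandas as pd

# Hypothetical CSV with columns: date, deaths, positive_tests
df = pd.read_csv("covid_stats.csv", parse_dates=["date"]).set_index("date").sort_index()
smoothed = df[["deaths", "positive_tests"]].rolling(window=7).mean()
print(smoothed.tail())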

Chart 2: Net growth of A.G. and GmbH companies in Switzerland (Jan-Aug 2020 vs 2019)



To build this chart, we leveraged Cloudlink and filtered only the A.G. and GmbH companies from the federal publications. We could have gone deeper in the analysis, for example looking at differences across regions or activity sectors, but the population size would then become too small for meaningful conclusions.

What do you think? Do you agree with our observations and hypotheses? Do let us know if you have some feedback via email, we're happy to continue this conversation.
Thoughts on the Pareto Principle     Tue Jul 21 2020
Many have heard of the Pareto principle - otherwise known as the 80:20 rule - but most only remember the first part of the principle: "you can achieve roughly 80% of the effect by focusing on 20% of the causes".

To be fair, Pareto is not so much a rule as a guiding principle. While not always accurate, it does show up in a variety of settings: a prototype provides 80% of the benefits, focusing on a couple of customer segments yields 80% of the profit, 20% of your apartment is where you spend 80% of your time.

We do think that the second part is just as interesting as the first: "completing the last mile, the remaining 20%, requires 80% of the work". This is what separates a pilot from an end-to-end solution, efficiency gains from an automated process, being top 10 from reaching number 1.
First steps with big data     Mon Jun 29 2020
Several years ago, companies weren't very diligent in digitizing their data, and therefore most commercial IT systems would generate data extracts with only thousands of entries. This has changed, and we've witnessed more and more people running into big data issues: not just geeks, but consultants, project managers and auditors; individuals not trained as data scientists.

We were reminded of this during a recent engagement where we used Cloud Link to analyze significant swaths of the Swiss population. We combined, filtered and scored millions of text and numeric data entries, a task Excel simply cannot handle.

Now don't get us wrong: we love Excel and use it a lot. However, if Excel is all you know and you have a few minutes to spare, then in this post we'd like to invite you to experiment with something different: R.
We'll focus on a few basic but useful operations: opening a data file, doing some exploratory stats, filtering relevant entries, storing these entries into a separate smaller file.

Why should you join in? R is orders of magnitude faster than Excel when working with big data sets and chances are you'll run into this challenge sooner than you think. It's a good investment of your time.

Step 1: Download and install R with this public link (it's free, how great is that). If you want to use your work computer but don't have installation rights, don't worry: most companies include R in their list of accepted applications, so you can order it via your company's standard procurement process.

Step 2 (optional): Install RStudio, which provides a more intuitive programming environment than the R console. You don't need it for this tutorial though.

Step 3: Create a folder on your Desktop called "R_Experiment" and store in it this example data file. It's a CSV (Comma Separated Values) file, one of the main standard formats by which large data sets are shared. Values are usually separated by commas or tabs, the latter being the case in this example. CSVs are human-readable, so you can open the file with any text editor (like Notepad) if you want to look at the 8 entries in our example.

Step 4: Great! All set to start. Launch R (or R-Studio) and type into the R console the "set working directory" command:
setwd("C:\Users\YourUserName\Desktop\R_Experiment")
With this command you are asking R to work from the folder you previously created. Note that your path will look different if you are on Mac or Linux.

Step 5: Now that you are in the right folder, you can read the file into R as so:
mydata <- read.csv("example.csv", sep="\t")
Variable "mydata" will hold the entire table of data from the CSV file, it's type is a data-frame. This is also a good place to get used to the "<-" notation in R, which you can simply imagine as "=". Finally the above command specifies the value-separator as tab ("\t" denotes a tab). You can change the separator to include any character you want, for example sep=","

Step 6: OK, so you've successfully loaded the contents from the CSV and stored them into variable "mydata". Let's quickly check by printing the first 2 rows:
head(mydata, 2)
Let's go a bit further and ask for some exploratory statistics:
summary(mydata)
Did you see that R automatically recognized which columns contain text versus numbers and applied the relevant statistics in each case? Pretty cool.

Step 7: Now for the most complex command of this tutorial, we're going to filter all entries where the department is "IT":
filterdata <- mydata[mydata$dept == "IT", ]
Let's break it down.
First, consider the left side of the command "filterdata <- ". This simply means that, whatever happens on the right side, we're going to store it into a variable called "filterdata".
The rest of the command tells R to take data-frame "mydata" and keep only the rows where mydata$dept equals "IT". The $ sign is how you refer to a column within a data-frame, so "mydata$dept" means column "dept" from data-frame "mydata". Also, the "," followed by nothing means that we don't want to apply a second filter on the columns: we want the full row returned wherever dept == "IT".
It's a bit much, we know, but you'll quickly get used to this notation. It's repetitive and there are tons of tutorials out there to help.

Step 8 (final): All that's left is to store the filtered data into a new file. Execute:
write.table(filterdata, file="filtered.csv", sep="\t", row.names = FALSE)
This command tells R to write the data-frame stored in variable "filterdata" into a file called "filtered.csv". This file will be stored in the working directory by default. Finally, it requests the output file to use tabs as separators (specified with sep="\t") and disables the row.names option; otherwise R would start each row with a row number, which we don't need.

Congrats, we're done! The powerful thing is that this approach will still work if you process a file with millions of entries. You can now open it, explore the data, filter out the relevant entries and store them in a separate file so that you can continue the analysis in a tool you are more familiar with, like Excel.
ML Chicken and egg     Tue May 26 2020
Well-annotated data is a rare thing in real-world machine-learning projects. So how can developers still use machine learning when no training data is available to start with? For example, let's say you're building a machine-learning model that classifies companies based on their text description (something we're building at Sense6). The challenge is that no data-set exists for the classification you have in mind.

A pragmatic way to tackle this challenge - we think - is by setting up 3 components:
  • A basic algorithm to kick-start the classification: in our company classification example it could be something as basic as a keyword-ranking algorithm, or alternatively something slightly more sophisticated like tf-idf (a minimal sketch follows this list)
  • A feedback loop: a combination of process and technology that enables end-users to review and update the classification that your model is generating
  • A machine-learning model: in our example we'd recommend using an LSTM, which is simply magical at "divining" the right classifications
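
As a rough idea of the tf-idf kick-start mentioned above, here is a minimal scikit-learn sketch. The example descriptions, labels and pipeline choices are made up for illustration; they are not our actual bootstrap algorithm.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A handful of hand-labelled company descriptions is enough to kick-start things.
texts = [
    "We build cloud software for banks",
    "Family-run bakery and cafe in Bern",
    "Maintenance of industrial machinery",
]
labels = ["technology", "food", "manufacturing"]

bootstrap = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
bootstrap.fit(texts, labels)
print(bootstrap.predict(["We develop software for insurance claims"]))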

So how does this work? The motto for your basic algorithm is "good enough": as soon as its output is a usable starting point for your end-users, move on.
The motto for the feedback loop is "smooth and beautiful". It's a vital step and one that often doesn't get enough attention. One unnecessary click, one slow load time, one unnecessary text field: any of these can mean the difference between abundant feedback and nothing at all. Enhance the feedback mechanism until it's as seamless and easy as possible for your end-users.
Finally the motto for your machine-learning model is "retrain, retrain, retrain".

Very soon you'll observe that the machine-learning model picks up where the basic model stopped and becomes orders of magnitude more precise, assuming you are gathering regular, high-quality feedback from your end-users via your feedback loop.

There are other ways around this chicken-and-egg problem (for example deep reinforcement learning), however they are often more complex and not always applicable in real-world situations. We've found that the above approach is pragmatic and rapidly leads to well-performing models.

Please contact us if you have alternative ways of dealing with the chicken-and-egg problem, or if you have a similar situation that you'd like us to take a look at.
When you give to the community     Thu May 7 2020
We didn't expect anything in return when we started coronaschool.ch at the start of the Covid-19 crisis. The goal was to spend one week-end and hack together a handful of web-applications that would help families with homeschooling. Word-of-mouth spread and, within a couple of weeks, the website was receiving requests in the low hundreds every day. We were happy that others were using it and didn't expect the learnings that followed.

1. Back-end lessons learned:
Corona School is built using Fargate from Amazon: a serverless container set-up that we'd been eyeing for some time. It turns out that traffic to coronaschool.ch is cyclical: according to our logs, most parents ask their children to do math from 9AM to 2PM. This taught us a few tricks about load-balancing and showed us how resilient Amazon Fargate is. We're happy with its performance and cost, and we'll re-use it next time we can.

2. Front-end lessons learned:
Mike Tyson has a quote about everyone having a plan until they get punched in the face. I'll paraphrase and say that every GUI developer thinks they're pretty good until their GUI lands in the hands of a bunch of five-to-ten-year-olds. I've never seen a GUI break down that fast, repeatedly. Our young users are still teaching us lessons and we'll make sure to integrate the improvements into our normal front-end.

The most valuable lesson has been the unexpected benefits of opening up parts of our technology to the public. Progress was much faster than if we'd tested internally for the same amount of time, something we'll keep in mind going forward.
Hello World!     Mon May 4 2020
Most virtual explorations start with a "hello world". This applies when you are exploring a new platform, a new programming language, a new database, or in some cases even new APIs.

Let's say that you want to learn to program in Python but have no idea how to begin. A Google search for "hello world python" will give you a good place to start, for example here.
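
And since it costs nothing to include it, here is the canonical example itself, the usual first line of Python anyone runs:

print("hello world")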

We think it's appropriate to start the Sense6 blog with a "hello world" of our own.