Privacy-Friendly Advertising Panels Powered by AI
Smart advertising screens are becoming the new standard for modern smart cities. Equipped with cameras and sensors, these screens let advertisers customize and refresh content on the fly while engaging and targeting audiences like never before.
Join Soracom, Edge Impulse, and Seeed Studio to learn how to build privacy-friendly advertising panels, including:
✅ Training a machine learning model to detect faces in real-time.
✅ Running web applications locally on the Seeed reComputer, powered by NVIDIA’s Jetson platform.
✅ Enabling AIoT by connecting any device to the cloud over cellular with Soracom’s Onyx LTE modem.
✅ Counting people and measuring the time of exposure.
✅ Automatically refreshing the displays while deploying “on-device anonymization” techniques using edge machine learning.
All right, welcome everyone, I am super happy to be here with you today. The topic of today will be privacy-friendly advertising panels, and we will be talking about on-device anonymization. Advertising screens installed in public spaces, equipped with cameras and other sensors, are becoming more and more popular. They offer new ways of interacting with the panel while gathering useful metrics for advertisers: they can count the number of passages made in front of the panel, measure the time of exposure, and automatically change the advertising content based on the number of people passing by. However, many passersby do not wish to be filmed in public spaces, which is completely understandable. This is why several companies have started to work on on-device anonymization techniques using edge machine learning.

For this workshop, Seeed Studio, Soracom, and Edge Impulse have partnered to build this tutorial and show you how to build a privacy-friendly advertising panel: how to train a neural network using Edge Impulse, how to build a web application running locally on the Seeed reComputer Jetson, and then how to forward the inference results, including the blurred, anonymized image, using Soracom. To get started, I will pass the mic to Helene. Helene is the Global Marketing Manager focusing on AIoT and partnerships at Seeed Studio. Thanks, Helene, for being here, and I'll let you start.

Yeah, thank you, Louis. I'm so happy to join this session with Louis and Nicolas to talk about building this privacy-friendly advertisement panel. I think this is a very good opportunity to boost sales for retailers; it's a bit like bringing Google Ads on-site to retail stores. Let me share my screen. This session uses our reComputer Jetson. The reComputer Jetson series is a whole product line built with NVIDIA Jetson SoMs, the Jetson Nano, the Xavier NX, and also the Orin NX production module, combined with Seeed's enclosure, housing, and carrier boards. At the top of the range on AI performance is the Orin NX version; we are still working on that and it will be released very soon, delivering up to 100 TOPS of AI performance. You can also choose the Nano or the Xavier NX to suit different needs. The reComputer is a small edge box that can fit anywhere; the carrier board size is actually nearly the same as NVIDIA's official dev kits for the Nano and the NX. What is more special is that it comes with JetPack preinstalled, which means it is ready for development and deployment, and it supports the Jetson software stack, the leading AI frameworks, and also Edge Impulse. Because Edge Impulse fully supports the embedded Jetson, you can directly add a microphone and a camera and seamlessly build a custom, reliable model in Edge Impulse Studio. Louis is going to show an example, building a FOMO model that is quite fast and gives good inferencing at the edge. The reComputer also comes with a rich set of I/Os: Gigabit Ethernet and USB 3. The one Louis is showing us today is the J1020; it comes with full USB 3 ports, an M.2 Key E slot to power 5G, Bluetooth, or other communication modules, and
an M.2 Key M slot on the back that can extend the storage with an SSD, plus a 40-pin GPIO header to extend more possibilities. Furthermore, Seeed also provides customization: we can pre-configure the software, and we offer hardware and I/O customization services. On the right there is also a webinar I walked through at NVIDIA GTC this March, presented by Cooler Screens; they enable personalized advertisements for customers at the grocery store, right in front of the cooler. When we talk about retail industry applications, or any application related to face detection, privacy is always the first priority we think about. Connectivity and security are also the main points our customers care about when they want to bring a device with AI preinstalled to the edge. So I'm quite excited about this session. The Soracom LTE connection provides secure, reliable, 24/7 connectivity. I will give my screen to Louis and Nicolas to go further into this step-by-step demo.

Awesome, thanks a lot, Helene. We are now with Nicolas DeVo and Nicolas Lesconnec; I think Nicolas Lesconnec is going to say a few words. Nicolas DeVo is a key account manager at Soracom and Nicolas Lesconnec is a strategic partnership manager at Soracom, and we have actually shared a lot together. We used to work together about six years ago now, we're getting older, at Sigfox, an IoT company. Nicolas was actually my first manager when I was working at Sigfox; he's been a great manager and I'm really grateful that we are working on this use case together again. Thanks, Nicolas, I'm passing the mic to you.

Thanks, Louis. Yes, I knew Louis when he was a kid back then, and I am very happy to be here today for this webinar with our partners Edge Impulse and Seeed. To give you a few words about who we are and what we provide at Soracom in the context of this session: basically, our role is to provide connectivity and connectivity services. Today's session will mostly focus on helping you create something, and that's not what we do. What we do is, once you have detected events and extracted the relevant information you want to upload to your cloud systems, we enable you to connect and transmit those data efficiently. So what we provide is connectivity with full MVNO capability; platform services, mostly upstream, be it at the networking level or the application level, with device management, provisioning, and cloud functions if you need to compute on your data on the fly; and some interface services that we'll see at the end of today's demo, mainly making sure that you are able, very quickly, to store and visualize the data you are extracting from your application. Basically, once you have managed to extract something clever or insightful from the environment, let's say using Seeed Studio hardware and Edge Impulse software, we will enable you to transmit it efficiently and do something with those data in your cloud applications. Don't worry, I don't have too many slides, Louis, I'll do it fast. So you have your things, whatever communication they use; you may have noticed the SIM card in my previous slide.
We provide cellular connectivity, be it 3G where available, LTE, LTE-M, or NB-IoT, depending on network availability all over the world. We also provide Sigfox connectivity and broadband connectivity for anything using IP-based protocols, so that you can forward everything into the same data ingestion pipeline, use the same Soracom tools, store the data together, and use our visualization tool that you'll see a bit later on. And maybe a couple of words: today is about privacy-friendly advertising, and that's not exactly what I'm showing here, but in some applications, beyond privacy, what you'll need at the network level is to make sure that you have private exchanges and that your data, or your customers' data, does not go through the public internet. That's part of what we can easily provide: the ability to build fully private connections from device to cloud and the other way around, making sure that your application is not only privacy-friendly but that your data is also properly secured along the way, with top-notch security from device to cloud. So that's it about Soracom. Of course, if you want to learn more about the details of our connectivity services, which are not the focus of today's session, or you are willing to become a partner or a customer of Soracom, I'd be more than happy to have a direct chat with you. You've got my contact details here, and otherwise you can just use the discussion window. Thanks, and looking forward to the session, Louis.

Thanks a lot, Nicolas. All right, I will get the screen share; you need to unshare your screen first, I think, Nicolas. Okay, let me stop and try to reshare. I think that should work. Yep, we can see your screen now, Louis. Okay, great, thanks.

All right, so for the agenda of today: I will first go through different image processing approaches, because to anonymize the image we need a computer vision approach. Then, based on the one we've chosen, which is called FOMO, short for Faster Objects, More Objects (we will go through that in a minute), I will show you how you can build your own machine learning model; it's a custom one, meaning you build it from scratch. Then I will show you how to extract the machine learning model that we built in the cloud and use it to build an application that runs on the edge device, in our case the Seeed reComputer Jetson. Finally, I will show you how to forward the metrics with Soracom, including a people counter and the anonymized image. We are going to use three tools: Edge Impulse Studio, the Seeed reComputer to host a web page, and a set of tools provided by Soracom to forward the inference results.

I've got only one slide about Edge Impulse and who we are. The goal of Edge Impulse is to get you to market faster when building embedded machine learning solutions, and we cover everything from data collection to impulse design. The impulse is a mix of digital signal processing and machine learning blocks; when you combine them, you can build efficient embedded machine learning models. We also provide tools to test your machine learning models, and a wide variety of options to deploy them on edge devices, whether they are MCU-based or Linux boards like the Jetson, among other targets.
And keep in mind that machine learning ops is always a loop: you iterate over time, and that is how your machine learning model gets better.

So, different image processing approaches. The first and most classical one is image classification. In that case, the model tries to answer one question, which here is: is there a face in the image or not? It's a binary classifier; one of the most common examples is "is it a dog or a cat?". It provides interesting information, but in our case all you could do with it is refrain from sending the pictures that contain a face, which is not great for our use case. That's image classification. Then we have object detection using bounding boxes. This should be perfect, because it tells you whether there are faces, how many there are, and especially their size. However, these models are extremely slow on edge devices. And then we have another approach, object detection using centroids, where the question the model answers is: are there faces in the image, and where are they? We don't really care about the size in this case. We have noticed with several customers that the size of the object in the image is not always as important as it first seems.

What's behind this kind of model is FOMO. It's a brand new approach that we developed internally, mainly by Mat Kelcey; he's based in Australia, he's an ex-Googler, and he's super brilliant. He came up with one idea: he wanted to be able to detect objects on MCUs. Detecting objects on MCUs is super hard, so it will obviously work on a Jetson-based computer. So what's behind the technique? I'm not going to spend too much time on the architecture because it can be quite complex, but we take a MobileNetV2-based architecture and use transfer learning from a pre-trained model, so we keep some of the weights. We cut the MobileNetV2 architecture, keeping only the first layers' weights, and retrain the last layers using our own collected data. Again, the exact technique is not the topic of this session, but for your understanding: when we take an image, we divide its dimensions by eight and obtain a grid, a feature map. At the moment the division by eight is the default; you will be able to modify it in the future. So we obtain a grid of cells, eight pixels each, and then we run a class prediction, like image classification, for each cell. Let me give you an example: here you have a receptive field, a cell, and we classify it: is it background, a ball, a dog, or a toy? That's basically the idea.

For the math behind it, let me take another example, which is probably clearer. Here it's a grayscale 96×96 image, so we obtain a 12×12 feature map of eight-pixel cells. We wanted to keep interoperability with other models, so we keep using bounding boxes to label our images. On each region of interest, here the screws, we can draw bounding boxes, and during training we only train on the centroid, only on the cell marked with the red dot (see the sketch below). We obtain a probability per class for each cell, and we apply some post-processing to merge cells that are too close to each other, which could belong to the same object.
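To make that grid arithmetic concrete, here is a minimal sketch in Python of how a bounding-box label can be reduced to a single centroid cell on the 96×96, 12×12 setup just described. This only illustrates the idea; it is not Edge Impulse's actual training code, and the helper names are made up for the example.

```python
# Illustrative sketch of FOMO-style label mapping (not Edge Impulse's code).
# A 96x96 input divided by the default factor of 8 gives a 12x12 grid;
# each labeled bounding box is reduced to the one grid cell holding its centroid.

INPUT_SIZE = 96
CELL = 8                      # default division factor
GRID = INPUT_SIZE // CELL     # 12

def centroid_cell(box):
    """box = (x, y, width, height) in pixels -> (row, col) of the centroid's cell."""
    cx = box[0] + box[2] / 2
    cy = box[1] + box[3] / 2
    return int(cy // CELL), int(cx // CELL)

# Training target: one class score per cell (background everywhere, except the
# centroid cell of each labeled face). A face labeled at (40, 28, 16, 16)
# activates exactly one of the 144 cells:
target = [[0.0] * GRID for _ in range(GRID)]
row, col = centroid_cell((40, 28, 16, 16))
target[row][col] = 1.0        # cell (4, 6) -> "face"; everything else background
```

At inference time the network produces the same 12×12 map of class probabilities, which is why two nearby detections can collapse into one cell, as noted next.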
Note that this leads to one limitation: the objects you are trying to detect, in our case faces, should not be too close to each other, otherwise they can be merged into one, and only the one with the higher probability is kept. So, the differences between the two, bounding boxes with MobileNetV2 SSD FPN versus FOMO: one is super fast; FOMO is great for real-time processing. Both use bounding boxes as the labeling method. MobileNetV2 SSD FPN has a limitation: it only allows a 320×320 input size. FOMO's only limitation is that the image needs to be square; it can then be any size. MobileNetV2 SSD FPN uses only RGB, whereas FOMO can use grayscale as well as RGB. The output of one is bounding boxes, which is nice because you get the size of the objects, whereas the other uses centroids, so you only get the location of the objects. FOMO can run on MCUs and the other cannot; both support GPUs.

Why have I chosen FOMO for our project? MobileNetV2 SSD works great with the Jetson, with the reComputer from Seeed, but it tends to be much better at detecting objects that take up a large portion of the screen. So if you want to put your camera far away from the advertising panel, it will have trouble detecting small objects, smaller faces. It also uses more compute resources, so you cannot process as many frames per second as you would like: on the Jetson reComputer, we are at around one or two frames per second, whereas using FOMO we achieve something like 35 or 40 frames per second, which is extremely fast. (The example shown here is on a Raspberry Pi.) If you want to learn more about FOMO, feel free to go to edgeimpulse.com/fomo.

Now I will go directly to the demo, because that's the part that's going to interest us. I built the tutorial and it's hosted on GitHub, at github.com/edgeimpulse/workshop-privacy-friendly-advertising-panel; I'm going to copy and paste the link directly into the chat so it's easier for everyone. There you have the whole tutorial that we created for you. So let's dive into the topic. That's the reComputer Jetson we're going to use; I love the form factor, it's really well designed. When you open the box, you find a brilliant piece of hardware, and on the left you can see the Soracom dongle, which provides LTE connectivity; it has the Soracom SIM card in it. That's what we are going to use as the edge device, connected to a screen, with an external USB camera attached; I don't have a specific one to link, any USB camera can work.

The first step is to build your machine learning model using Edge Impulse. To do so, I invite you to create an account on Edge Impulse Studio, at edgeimpulse.com. Once you're in, we'll actually start with a blank project and create a new one, so I can show you how to get started and build a machine learning project from scratch. The only thing is that I will definitely need more images than what I can record in a small session, so I will then switch to another project which is fully trained, but for this first part I will guide you through it live. I create a new project; can you see properly, or do you want me to zoom my screen a bit? I know sometimes it's easier. Okay.
I think that it’s good, better for your eyes. So when you create a new project, you have a small wizard explaining to you or guiding you through which kind of project you want to create. In my case, I want an image project and I want to classify multiple objects which is called object detection. I’m going to select that and yes I know what I’m doing, hide this wizard, that’s great. So the first step is to collect some data. You cannot start a machine learning project without any data because the machine learning model will learn on the data. So first you can navigate to the data acquisition tab and you have several options to collect some data. You can use your mobile phone directly by showing you a QR code so you can flash the QR code and connect your phone. Does this work? Edge Impulse and my phone should be connected, to get started and then with my phone I can collect different images like that and probably will be arriving Yeah that’s neat. So that’s one first image that I can collect and I can also collect, well you can gather data from basically any source. If you’ve got some data sets already available in your S3 buckets you can import that as well and you can use your computer to collect some data, give access to the camera and then I can do the same. Okay now I’ve got some pictures, so definitely not enough to create a project, but I have my first images. So I will need to label my data because at the moment I’ve only got images and I don’t have any information about the location of the face, so we provide tools as well to help you to label your images. So here in that case I will draw a bounding box around my face. I can set the label and I can pass on to the next image. Do it like that. This process can be really tedious, so we have different tools to help you. You can track objects between frames or you can classify that using YOLOv5. The only problem with YOLOv5 is that the dataset contains, well, labels contain a label person but not face, so I won’t be able to use that to label those faces. So again on this project I’m just labeling a few images of my face for the model to work good on a wide variety of persons. It’s really important that you have a diverse and ethical dataset, meaning you should have the same number of male versus female of white people versus black and other ethnic save labels, that’s really important if you want to have a production model that goes live. Then once you have your data set labeled, here in my case I’ve only got five items so it’s definitely not enough, those data are put in the training sets. You can also put some data in the test sets which are not going to be used to train the model, but we are going to use that later to test the accuracy of our model. Now I’m just going to quit that project and go back to the other one that I created for you, is called Fomo Bigger Dataset on this one. So I’ve used a subset of the FFHQ dataset which is provided by Flickr, it’s an open source dataset that we can use, and if you go in the data acquisition you can see the different faces that I have. I also collected some faces of my colleagues to do so. So this dataset is a mix of open source dataset and pictures that I collected myself. So once you have like, at the moment I’ve got four hundred items in my training data set and I’ve got something like one hundred items in my test data set. Now that I’ve got enough data that I can move to the create impulse tab, this tab is super important, it will create your machine learning pipeline for you. 
For this case I'm going to use 96×96 images, so all the images in my Data acquisition tab will be shrunk before being passed to the preprocessing. The preprocessing here is not complicated, because I keep the RGB images. You can select from a wide variety of preprocessing blocks; here only the image ones are available, but for other kinds of machine learning models we have blocks for audio, spectral analysis if you're trying to recognize movements, spectrograms, and more. You can also use only the raw data, in our case the raw pixels, if you wish. I'm going to stick with the image block. Then I use the object detection learning block that I selected in the first step when I created the project, and I save my impulse.

Once your impulse is saved, your pipeline is created, and you can navigate to the next tab, the Image tab; that's the preprocessing. It takes the raw features, the pixels of the image, and preprocesses them so it's easier for the neural network to ingest and learn from; here I'm pretty sure it's just doing a kind of normalization on the pixels. I can check with one image, then another one. I save the parameters and generate the features. Generating the features converts all the pixels into the normalized array that the neural network will be fed with, and you get some information about the on-device performance: the processing time and the maximum resource consumption, in this case the RAM. Here I've got only one class, only faces, but if you have several, it's a good idea to check the feature explorer to see whether you can start to distinguish clusters; if so, it usually means the neural network will learn efficiently.

Once that's done, navigate to the Object detection tab; that's the machine learning part. You can set the number of training cycles and the learning rate, which are the hyperparameters of your machine learning model. You can set the validation set size: from your training dataset, some data is used to fit the weights and some is held out as a validation set, which is different from the test set. The test set is kept apart during the whole training process; the validation set is used during training to adjust the weights of your neural network. Do we want to use data augmentation? That depends on the use case, but in ours it's preferable. Then we have several model options for object detection: either MobileNetV2 SSD FPN Lite, which I talked about before and which provides the size of the objects, or FOMO. For FOMO we have two alphas; I think I've used the one with the higher alpha. That's not a big deal: the lower alpha will probably be a bit less accurate but also a bit more lightweight. Then you click on "Start training"; it takes a few minutes, and afterwards you obtain an F1 score. The F1 score is a good metric, probably not the best one, but in our case it's enough. Two versions of the model are created for you, one quantized and one unoptimized: the quantized one uses int8 values and the unoptimized one uses float32 (illustrated in the sketch below).
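As a rough illustration of the difference between those two versions, here is what 8-bit affine quantization of a few float32 weights looks like. This is a generic sketch of the idea, shown with an unsigned 8-bit range for simplicity, not the exact scheme Edge Impulse applies.

```python
import numpy as np

# Generic 8-bit affine quantization: store float32 values as 8-bit integers
# plus one shared scale and zero-point for the whole tensor.
weights = np.array([-0.42, 0.0, 0.13, 0.91], dtype=np.float32)

scale = float(weights.max() - weights.min()) / 255.0
zero_point = int(round(-float(weights.min()) / scale))

q = np.clip(np.round(weights / scale) + zero_point, 0, 255).astype(np.uint8)
dequantized = (q.astype(np.float32) - zero_point) * scale

print(q)            # e.g. [  0  81 106 255]
print(dequantized)  # close to the originals, with a small rounding error
```

The quantized model is roughly four times smaller and runs faster on integer-friendly hardware, at the cost of the small rounding error visible in the output.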
I know that the Seeed reComputer Jetson is way too powerful to need the quantized model, so I can use the unoptimized one. That's great because it provides a better F1 score. If I take a quick look at the confusion matrix, the background is almost always properly recognized, and on faces I've got an accuracy of 84.6% on the dataset I trained with. For a workshop and a first proof of concept, I consider that more than enough to continue. Then I can make sure my model is good enough: I click on "Classify all", which runs inference across my test dataset, and with that I can check that the model has not been overtrained. Here I've got a lower accuracy, 73.5%. It's not great, but again, for a small dataset containing 400 images, I consider that good enough for what I want to do. Then you can version your project and make it public if you want. This project is public, so if you want to have a look, you can open the public version; I will copy and paste the link in the chat as well, so let me know if you cannot see the messages I'm pasting. You can clone the public project and get started from there.

Once I've done that, I can navigate to the Deployment tab; we have several options to deploy your machine learning model on edge devices. Most people use the C++ library when deploying to MCUs; in our case we are going to use the Linux board option. For that, you need to install a command-line interface on your Linux machine, the Edge Impulse CLI for Linux. It automatically detects which architecture you are using and downloads the right build. That's what we are going to do. I'm not going to switch to the Seeed reComputer right now; I'm just going to show you how to use the CLI on my MacBook Pro here. I'm not sure you can see my terminal, so I will just unshare my screen and share it again. Maybe if you have questions I can take some right now before moving to the next part; feel free to ask either in the chat or in the Q&A. No questions so far? Okay, I'll continue. I hope you can see my terminal; I'm just going to zoom a bit.

To download the Edge Impulse model, you use the CLI: edge-impulse-linux-runner, which I'm going to run with the clean option. It asks for my credentials, then I pick the project I want. It downloads the model and creates a web application. At the moment this is only running on my MacBook Pro, but it's exactly the same procedure if you want to run it on the Seeed reComputer Jetson. As you can see, my Mac is obviously super fast, and it detects my face easily. There are a few glitches; here, for example, sometimes my fingers are recognized as a face, which is funny. Maybe some of the images in the training set show people with their hands around their face; it happens. You just need more data to make it more accurate. Oh no, sorry, again you cannot see my screen. Yeah, I was about to say, Louis, when you show us your console we cannot see the bounding boxes. Yeah, I cannot share my whole Chrome window. Can you see it now?
Okay, awesome, it's fine now. So, yeah, that's my face, detected by the model that was trained. It's running completely locally, on my MacBook Pro at the moment. As I said, some of my fingers are recognized as a face; that's a false positive. It's probably because in some of the pictures in the dataset I trained on, when we drew bounding boxes around the face, a hand was included, so it recognizes some fingers as a face. That's not a big deal; to get past it you just need more pictures in your dataset.

And how many concurrent objects can you detect in a single frame?

On a 96×96 input, as we divide the height and the width by eight, you can have at most 12×12, which is 144 if I'm not mistaken. That said, I think we have a limitation in our SDK for the C++ deployments, because most targets, like MCUs, won't be able to support that many; I think we cap it at ten to fifteen objects, but this can be changed if you have a more powerful device. At the moment I've only got my face as a label, but you could have, for example, faces plus dog faces and cat faces, which would be two or three different labels, or a car, a truck, and a person, or a bicycle, and those can all be detected at the same time. Okay, great, thank you.

All right, let me go back to the tutorial. The preprocessing I've already explained, and this is what I've just shown you in my terminal with edge-impulse-linux-runner: with the clean option you log in to your account, and it automatically downloads the Edge Impulse model for you and detects which architecture you have; on the Seeed reComputer, I believe the Jetson is an AArch64 (64-bit Arm) architecture. In this screenshot I was using grayscale images; I noticed that the RGB version worked better, so I switched the project to RGB.

Now I'm going to give you a heads-up on how you can integrate this model using our Python SDK to create an application from scratch. Have a look at github.com/edgeimpulse/linux-sdk-python; this repository contains several examples of how to do that. For images, I've mostly reused classify.py for the project, and I want to go through the custom code that we wrote for you, which is application.py. It only contains about 200 lines of code, so as you can see the application is really simple. It also includes a small web page that you can find in the templates folder, index.html, which is the rendering for the panel. It's not complicated: there's a title, a background image, a live stream where I'm streaming different advertisements, and another live stream which is the camera feed after post-processing. So it first detects the face, and then I'm going to show you how the face is blurred in application.py. It's written in Python; note that we also have SDKs for Node.js and, I think, Go, several of them, so feel free to use whichever language you're familiar with. Now, what's interesting in the code: I'm not going to go through every line, I'm just going to explain how you retrieve the model that you downloaded through Edge Impulse.
The model file is passed as an argument; you initiate the Edge Impulse runner, initialize your camera, and then pass each frame from the camera into the classifier. The classification part we don't care about, because we are using object detection, and all the post-processing is done here. We set up a buffer to count the people and record the inference speed, which is also displayed on the web page. Then, for each bounding box, for each object that has been detected, we check the confidence value; if you're getting too many false positives, you can set the threshold a bit higher. Above the threshold, we increase the people counter, and the mask is applied here: we take the image, create a mask, blur the mask, and reapply it to the image. That's the image that gets displayed, and it's also the image that is forwarded: just enable the flag if you want to use Soracom, and there's one function to send the inference and another to send the image. We are going to see that in a minute (a condensed sketch of this loop follows below).

Okay, I'm stopping that for a second. I'll show you the page on the Jetson afterwards, but I need to unplug my camera first, otherwise you won't have the video feed. So, how do you send the inference results with Soracom? When you order the USB dongle, this one is called the Onyx LTE USB dongle, it comes with a SIM card, and usually an account is set up for you, with your SIM card already on your account. I'm going to go to console.soracom.io. When you order a SIM card, you usually already have one SIM associated with your account; feel free to correct me, Nicolas, if I'm wrong. I don't think I had to activate it myself, except maybe connecting it to my laptop. You may have had some premium onboarding process, Louis; anyway, it is usually pretty smooth and fast, so no need to elaborate too much on that. Okay, great to know.

We are going to use three things from Soracom. First, the data connectivity and the messages, so the internet connection itself. Then we are going to use Soracom Harvest, both Harvest Data and Harvest Files. To enable those, you go to your account, then to Groups, the "edge impulse" group that I created before, and you enable both Soracom Harvest Data and Soracom Harvest Files. Harvest Data lets you collect objects; in my case it's just a people counter, the object that I send from the code, count_people. That's all I'm going to retrieve. Harvest Files receives the blurred images, and you need to enable that as well; there are a few configurations to set, all written in the tutorial. Once this is set, you can navigate to Data storage and visualization and check the data, though none has arrived here yet. Let me turn on auto-refresh, and I'm going to open up the reComputer Jetson which is behind me; I just need a second to do that.
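Before the live demo, here is a condensed, hedged sketch of what a detection-and-blur loop like application.py boils down to, based on the patterns in the linux-sdk-python examples. The 0.5 confidence threshold, the blur padding, and the camera index are placeholder choices for illustration, not the workshop's exact values.

```python
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "modelfile.eim"   # the model downloaded from Edge Impulse

def blur_detections(frame, boxes, pad=24):
    """Blur an expanded region around each FOMO centroid box (heuristic padding)."""
    for bb in boxes:
        x0 = max(bb["x"] - pad, 0)
        y0 = max(bb["y"] - pad, 0)
        x1 = bb["x"] + bb["width"] + pad
        y1 = bb["y"] + bb["height"] + pad
        frame[y0:y1, x0:x1] = cv2.GaussianBlur(frame[y0:y1, x0:x1], (51, 51), 0)
    return frame

with ImageImpulseRunner(MODEL_PATH) as runner:
    runner.init()                                   # load the model
    camera = cv2.VideoCapture(0)                    # external USB camera
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        # `cropped` is the square crop actually fed to the model, so the
        # returned box coordinates line up with it.
        features, cropped = runner.get_features_from_image(img)
        res = runner.classify(features)

        # Keep only detections above the confidence threshold.
        faces = [bb for bb in res["result"]["bounding_boxes"] if bb["value"] > 0.5]
        count_people = len(faces)                   # the metric sent to Soracom
        anonymized = blur_detections(cropped, faces)
        # ... display `anonymized` on the web page and forward it periodically.
```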
Unfortunately I need to unplug my camera so that it can work; I just need one second, I know it's not the smoothest. Louis, for your information, you did not select any resource in the top-left drop-down menu, so we might not see the incoming data that way. Oh, correct, thank you. So, that's this SIM card, and here they are. Here we are. Okay, I can open the 192.168 address, let me just check, .1.16, and this is the application that is running on the Jetson. So you can see my back; okay, that's me. When it recognizes two people, it displays another ad. And that's me again. We have a few false positives, which is not a big deal. Oh, here we have an issue; okay, that's not a problem.

Sorry, Louis, a quick question: this amount of false positives, you think it's mainly due to the quality of the trained model?

Yeah, definitely. For example, I've just set up the screen behind me, so I hadn't tested the model in this location before the workshop. What's good with Edge Impulse is that you can create custom models; for example, you can adapt the model to where you want to put your advertising screen. If you want to put it in the street, you can retrain just one model: take a general model, but retrain it with that background, that street, that camera angle, so you know it will work well in those conditions. If you want to create a model that is truly general, you will need much more data than 400 images, or at least data with more variety than what I've used. Actually, can you start your video? Oh yeah, I forgot that my laptop had a camera as well.

So that's great: the model is still running, and I'm forwarding the people counter, the number of persons detected, every ten seconds, and I'm sending a frame once every minute (sketched below). Once the information is coming into Soracom Harvest, I'm going to use another of their tools, which is great: Soracom Lagoon. Basically it's Grafana, already plugged into what you receive. I'm going to go to my account and... oh yes, I need to find the password. I believe you cannot see my screen now? We can see the Lagoon login page at the moment. Lagoon is a dashboarding tool that can be used by members other than the ones accessing the console, so it has different credentials; maybe you only have one stored in the browser at the moment. It might be quicker to do a password reset. What I will do is just unshare my screen and share it again with the right page. Which one are you seeing right now? Still the Soracom Lagoon login page. Oh yeah, that one. Here we go. Okay, so now it's not seeing anyone; let me go back and try to show my face. Hopefully it arrives here; it may need a refresh. That's the demo effect: I'm not sure why it's not forwarding the results, let me try that again. Now that all the processing is done directly on the Jetson, I'm going to stop sharing and share again. Yeah, we saw on the chart that there were objects coming; we saw the chart going up to two and then getting back to zero.
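For reference, the forwarding cadence just shown, the people counter every ten seconds and a blurred frame once a minute, comes down to something like the sketch below. It assumes Soracom's documented Harvest endpoints (harvest.soracom.io for data, harvest-files.soracom.io for files), which are only reachable from a device whose traffic goes out over the Soracom connection; the file path is an arbitrary example.

```python
import time
import cv2
import requests

HARVEST_DATA_URL = "http://harvest.soracom.io"          # Harvest Data (JSON records)
HARVEST_FILES_URL = "http://harvest-files.soracom.io"   # Harvest Files (binary)

def send_inference(count_people):
    # One JSON record per call, visible under Data storage and visualization.
    requests.post(HARVEST_DATA_URL, json={"count_people": count_people}, timeout=10)

def send_image(frame):
    ok, jpeg = cv2.imencode(".jpg", frame)
    if ok:
        requests.put(f"{HARVEST_FILES_URL}/panel/latest.jpg",   # example path
                     data=jpeg.tobytes(),
                     headers={"Content-Type": "image/jpeg"}, timeout=30)

last_data = last_file = 0.0

def maybe_forward(count_people, frame):
    """Call once per processed frame; rate-limits the two uploads."""
    global last_data, last_file
    now = time.monotonic()
    if now - last_data >= 10:        # counter every ten seconds
        send_inference(count_people)
        last_data = now
    if now - last_file >= 60:        # blurred frame once a minute
        send_image(frame)
        last_file = now
```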
I'm going to share that screen again, the one you can see. Also, about the way I trained the model: I made sure to be, well, not as far as possible, but at least one to two meters away from the camera. So if I'm too close to the camera, or too far away, the model has not been trained in those conditions; that's probably another source of false positives, and indeed I hadn't tested the configuration with the new setup behind me.

Sorry, Louis, another question: for the purpose of the demonstration, has the model been trained on detecting your face, or would it detect any human face, whatever the age, gender, or ethnicity?

Let me go back to this one: the FFHQ dataset includes all kinds of faces; it's really good because it's quite varied. I've only taken a subset, because the full dataset contains millions of images. And I added some more images of my face, so it's better at recognizing me and the faces of some of my colleagues: that's David Tischler, and other colleagues, Omar and Nabil. Obviously our company is not the most diverse one, like most tech companies I would say, but at least the dataset covers those. I also tried it with my newborn kid, and it works; it recognizes him pretty well. For the images, I tried to only use pictures that look like they were taken 1.5 or 2 meters away from the camera. Obviously this is for demo purposes; for a real advertising camera, I'm pretty sure you would need the camera much further away, so the faces will be much smaller, but then you can use bigger input images.

I want to check Lagoon again. Okay, it came back; I'm going to stop sharing and share again, sorry for the back and forth. You can see that every few seconds we've got new data coming in, and I wanted to show the image. Okay, so it's not showing my face here, which is blurred, but it's exactly the same as the one you've seen, with the green frame, where the face is blurred. This is really nice, because by doing this you can use those anonymized images and take the post-processing one step further. For example, if you want to understand how long people have been in front of the screen, or how they interacted with it: did they stay for a while, did they look at the advertising screen? Those are things you can do. The same goes for analyzing consumer behavior in supermarkets: if you want to do that without collecting faces, with anonymized images only, you can apply the same techniques, and the same applies to video surveillance or similar imagery. Different techniques exist to blur or anonymize; at the moment, most of it is done in the cloud. I've seen a lot of videos of anonymization techniques, but doing it on the device really takes privacy one step further, because the original image cannot be retrieved in any way by any application or any other AI algorithm.

That's it for me for this workshop. I hope you enjoyed it, and I'd be really happy to answer any questions you might have; feel free to shout. Nico, you had quite a few questions, do you have any others? I gave them all during the presentation. All right. Well, I have a question, Louis.
I want to know: when we're building the advertisement, is it possible to differentiate people, maybe by age or by different groups, to set up different personalized advertisements, while at the same time blurring the faces, by running different models like that?

Yeah, that's a really good question; indeed, we can do that. The only thing is, this kind of technique has been criticized quite a bit by the public lately, like differentiating even emotions. Microsoft had a great algorithm quite a while ago, and I think they decided to just stop it because it was being criticized, so I didn't want to go in that direction. You need to think twice about what you want to do before doing so and make sure it's respectful. I'm not really sure about displaying, for example, a toy advertisement to kids versus a toothbrush advertisement to adults; that's probably not a big deal, but if you mistake a kid for an adult, you need to think about the consequences. This example is pretty harmless, but the same goes for gender identification; I don't want to enter that debate, we just need to think ethically about what we want to do.

Yeah, cool. And also, for the attendees and registrants, as we posted on our social channels, we will give away one reComputer J1010, which is powered by the Jetson Nano; it starts from 199 dollars and comes with three USB 2.0 ports and one USB 3.0 port. We're going to give it away after this webinar.

Oh, that's great. And now that I've finished the demo, I'm actually going to unplug the reComputer and show it to you. I really love the form factor of this Jetson computer; it looks like this, and I love it. It has every connectivity option except maybe Wi-Fi. Actually, if you add the Wi-Fi module, you can enable that. And if you press the button at the bottom, you can open the box and see inside. Oh yeah, you can easily open it; you just push, there's a magnet, I think, where you can open it. Just a magnet. Yes, correct, exactly. Anyway, it's a great product; I think I will leave it set up over there and test different things on it. If you want to install everything and make it ready to work with Edge Impulse, we have a documentation page under the community boards; it explains how to set up the reComputer, work we did a few months ago with the Seeed team. Yes, and we also have another example, helmet detection for construction scenarios, also using Edge Impulse. Great. Well, if there are no other questions, I wish you a very good day, evening, or probably night. Helene, what time is it on your side? It's exactly midnight. Yeah, thanks for staying up late for us. I wish you a very good day and a very good night; bye bye everyone, it was a pleasure to have you.