# Usergroup meeting 4 AI@EDGE

Date: 29 April 2022
Location: Online - Teams

  • Present:
    • Kris Bellemans (E.D.&A.)
    • Nicolas Maes (Picanol)
    • Peter Papics (Transport & Mobility Leuven)
    • Luc Buydens (Melexis)
    • Chandu Kancharla (KU Leuven - Brugge)
    • Patrick Puissant (Picanol)
    • Edouard Charvet (Scioteq)
  • Excused:
    • Fabrice Verhaert (6Wolves)
    • Gert Van de Wouwer (Digipolis)
    • Dieter Therssen (DSP Valley)
    • Patrick Smout (E.D.&A.)
    • Frank Allemeersch (Sensotec)
    • Kris Vanherle (Transport & Mobility Leuven)
    • Ben Minnaert (Odisee)
    • Nick Destrycker (Edgise)
    • Willem Romanus (6Wolves)
    • Anke Van Campen (VLAIO - project advisor)
  • Project team members:
    • Toon Goedemé (KU Leuven - De Nayer)
    • Kristof Van Beeck (KU Leuven - De Nayer)
    • Maarten Vandersteegen (KU Leuven - De Nayer)
    • Sille Van Landschoot (VIVES)
    • Jonas Lannoo (VIVES)

# Presentation Slides

Open the presentation in a new page:
User group meeting slides

# Introduction

Introduction and practicalities, administrative business of the project:

The composition of the User Group has been finalised via the rules-of-order documents. An update was given on the work plan and on the progress of the milestones and work packages. An overview of the final user group members is shown on slide 4. The user group has a new member: the LarbitsSisters, the artist duo behind Larbitslab, who make art with and about technology. Because Piet Cordemans was absent for a period around New Year, Jonas Lannoo took over the project coordination until the end of the project. Two types of workshops have been developed, at two levels of difficulty. One offers an introduction to deep learning with minimal programming and uses the online tool Edge Impulse to train AI models and deploy them on an embedded device. Finally, it was mentioned that the project received an extension due to the COVID pandemic, so the end of the project is scheduled for next month, end of May.

The work plan proposed in the project documents was shown: four work packages, each consisting of sub-packages. Progress bars indicate the progress of each step within a work package and of the milestones associated with it.

# Academic use-cases

The goals, challenges, approach, and steps of the AB writing use case were explained. The whole toolchain from data collection to deployment with Edge Impulse was shown. The first hands-on workshop has taken place and new ones are planned, both for KU Leuven students and for industry. The seat detection case has been solved by two student groups in two different ways. The car detection workshop was explained; it is organised for secondary school students in STEM sessions and uses Edge Impulse. In addition, the workshop was organised for industry on 22/04/2022, detecting cars on an ESP-EYE module (a deployment sketch is given below). The question was raised whether this workshop should be repeated.
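
As an illustration of the deployment step at the end of the Edge Impulse toolchain (for example the car detection model on the ESP-EYE), the sketch below follows the static-buffer pattern that Edge Impulse generates with its exported C++ inferencing library. This is a minimal sketch, not the workshop's actual code: the header name and the feature values are placeholders that the real project export would provide.

```cpp
// Minimal sketch, assuming the C++ inferencing library exported from an
// Edge Impulse Studio project (the header name below is a placeholder).
#include <string.h>
#include "car_detection_inferencing.h"   // hypothetical project export

// One frame of raw features copied from Edge Impulse Studio (placeholder values).
static const float features[] = { 0.0f /* ... raw feature values ... */ };

// Callback that feeds slices of the feature buffer to the classifier.
static int raw_feature_get_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

void run_inference() {
    // Wrap the buffer in a signal the classifier can pull data from.
    signal_t signal;
    signal.total_length = sizeof(features) / sizeof(features[0]);
    signal.get_data = &raw_feature_get_data;

    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR err = run_classifier(&signal, &result, false);
    if (err != EI_IMPULSE_OK) {
        return;  // classification failed
    }

    // Print the confidence per label (e.g. "car" / "no car").
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        ei_printf("%s: %.5f\n",
                  result.classification[ix].label,
                  result.classification[ix].value);
    }
}
```

In the actual workshop the features would come from the ESP-EYE camera rather than a static buffer; the static-buffer form is simply the easiest way to verify the exported model on-device.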

# Industrial use-cases

The use cases from Melexis, E.D.&A., TML, Yogalife (6Wolves), the LarbitsSisters, and Gemeente Sint-Katelijne-Waver were explained.

An update was given on the performance and model size of the Melexis use case.

The implementation and preparation for the E.D.&A. use case were presented. The data had to be prepared, relabelled, and imported into Edge Impulse. A demo was given of the model running on an ARM Cortex-M4 microcontroller. The next step is to implement it on a smaller microcontroller and optimise the neural network.

The goal and results of the TML use case were explained, and suggestions were given on how to improve the model and system.

The 6Wolves use case was presented. The goal is to detect a squat with one or more IMUs attached to the body. The work will be done by a student as a bachelor's thesis. Because no data is available yet, he must follow a few steps; the challenges were indicated.

The LarbitsSisters duo have an art concept, the CMC (Crypto Miner Car): four autonomous vehicles driving around, each representing a cryptocurrency. Under the platform on which they drive, a computer mines those cryptocurrencies. The goal is to let the vehicles drive autonomously around a track (which also has the shape of the cryptocurrency's logo) and let them charge automatically. The energy to charge them is provided by recovering the heat generated by the GPUs that mine the cryptocurrencies. The problem is that the autonomous driving of the vehicles does not work well. Each car has a single on-board camera and three distance sensors, and the AI learning-based component should be embedded on a Raspberry Pi 3. The solution was to use the floodfill algorithm, divide the camera image into 11 segments, and find the furthest point; the best driving direction was then decided via Q-learning. The three distance sensors complement the vision algorithm to keep the car on track and prevent it from bumping into the sides. The conclusion was that the algorithm works, but the car is too large for the track, it cannot rotate 180 degrees (dead ends), it is mechanically weak, it is slow, and it has issues with illumination. The car has therefore been redesigned: it now uses seven distance sensors without a camera and a force-field algorithm to find its path, resulting in a much faster and more reliable vehicle (a sketch of the force-field idea is given below).

Finally, the use case for Sint-Katelijne-Waver is to catch stray cats with a trap cage. The system should emit a signal when a cat is caught; the cat will be detected via AI and a camera.
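
To illustrate the force-field (potential-field) idea used in the redesigned CMC vehicle, the sketch below derives a steering direction from the distance readings. The minutes only mention seven distance sensors and a force-field algorithm; the sensor angles, gains, and scaling used here are illustrative assumptions, not the LarbitsSisters' actual implementation.

```cpp
// Minimal potential-field steering sketch: seven distance sensors spread across
// the front of the vehicle push the car away from nearby obstacles, while a
// constant forward "attraction" keeps it moving. Angles and gains are assumed
// for illustration; the real CMC vehicle may differ.
#include <algorithm>
#include <cmath>
#include <cstdio>

constexpr int kNumSensors = 7;
// Assumed mounting angles (radians), from far left (-90 deg) to far right (+90 deg).
constexpr double kSensorAngle[kNumSensors] = {
    -1.5708, -1.0472, -0.5236, 0.0, 0.5236, 1.0472, 1.5708};
constexpr double kRepulsionGain = 0.25;  // strength of obstacle repulsion
constexpr double kForwardPull   = 1.0;   // constant attraction straight ahead

// Returns a steering angle in radians: negative = steer left, positive = steer right.
double steeringFromDistances(const double distance_m[kNumSensors]) {
    double fx = kForwardPull;  // forward component of the resulting force
    double fy = 0.0;           // lateral component (positive = right)
    for (int i = 0; i < kNumSensors; ++i) {
        // Closer obstacles push harder (inverse-square falloff), directed
        // away from the sensor's viewing direction.
        double d = std::max(distance_m[i], 0.05);  // avoid division by zero
        double push = kRepulsionGain / (d * d);
        fx -= push * std::cos(kSensorAngle[i]);
        fy -= push * std::sin(kSensorAngle[i]);
    }
    return std::atan2(fy, fx);
}

int main() {
    // Example: an obstacle close on the right should make the car steer left.
    const double distances[kNumSensors] = {2.0, 2.0, 2.0, 1.5, 0.4, 0.3, 0.5};
    std::printf("steering angle: %.2f rad\n", steeringFromDistances(distances));
    return 0;
}
```

The inverse-square falloff makes nearby obstacles dominate the sum of forces, which is the main property a force-field approach relies on to keep the car off the track walls without an explicit camera-based path.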

# Future Plans

Future plans include the finalisation of the manual and use cases, the organisation of the final symposium, and advertising the summer school on AI organised by the Postuniversitair Centrum of KU Leuven.

# Discussion

  • Luc Buydens (Melexis): Question about GDPR: how to proceed with the issue of capturing people with a camera, and how to use this data on the company's own microcontroller given GDPR constraints. We are bound by it, and the question triggers further actions. Nevertheless, the result is very nice and the proof of concept is OK.
  • Kris Bellemans (E.D.&A.): Question to Sille, as possible future work for the use case: can the AI model be deployed on a Cortex-M0, and can this be done by the final symposium?
    • Yes, it uses the mbed framework, so the target can easily be switched.
  • Jonas Lannoo (VIVES): How did TML tackle the GDPR problem?
    • Peter Papics (TML): They used data collected from students some years ago, before the GDPR rules were in force.
  • Toon Goedemé (KU Leuven): A new project automatically blurs faces to solve this issue.