SEMIoTICS: Bringing artificial intelligence to the edge: low-power embedded solutions

This project aims to bring Artificial Intelligence (AI) for Computer Vision (CV) to the edge within the Internet of Things (IoT) paradigm. The project comprises four main goals, namely: visual object detection and tracking on embedded devices, mixed-mode on-chip CMOS hyperdimensional computing, on-chip event generation and time-of-flight sensing, and strategies for low-power circuit design.

Visual object detection and tracking on embedded devices addresses state-of-the-art deep learning models, or deep neural networks (DNNs), deployed on FPGAs within a power budget of around 5 W. We target smart DNN cameras that combine CMOS vision sensors designed within the project with these FPGAs running DNNs, building up an AI-enabled edge device. Hyperdimensional computing is pursued as an alternative paradigm to DNNs; its inherent error resilience makes it well suited to mixed-mode on-chip CMOS implementation. We aim at a custom mixed-signal design for object detection at video rate with a power consumption on the order of 100 mW.
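To illustrate why hyperdimensional computing tolerates errors so well, the following is a minimal software sketch (not the project's mixed-signal design): classes are represented as bundled binary hypervectors, and a query corrupted by heavy bit-flip noise is still classified correctly by Hamming distance. The dimensionality, noise level, and two-class setup are illustrative assumptions.

```python
import numpy as np

D = 10_000  # hypervector dimensionality (assumed); high D gives error resilience
rng = np.random.default_rng(0)

def random_hv():
    """Draw a random dense binary hypervector."""
    return rng.integers(0, 2, D, dtype=np.uint8)

def bundle(hvs):
    """Superpose several hypervectors by bitwise majority vote."""
    return (np.sum(hvs, axis=0) > len(hvs) / 2).astype(np.uint8)

def hamming(a, b):
    """Normalized Hamming distance between two hypervectors."""
    return np.count_nonzero(a != b) / D

# Two illustrative "class prototypes", each bundled from three sample hypervectors.
samples = [random_hv() for _ in range(6)]
class_a = bundle(samples[:3])
class_b = bundle(samples[3:])

# Corrupt a known class-A sample by flipping 20% of its bits (simulated errors).
query = samples[0].copy()
query[rng.random(D) < 0.2] ^= 1

# Despite the noise, the query stays much closer to its own class prototype.
pred = "A" if hamming(query, class_a) < hamming(query, class_b) else "B"
```

Because similarity is spread over thousands of bits, flipping a substantial fraction of them barely moves the distance ranking, which is exactly the property that makes analog, somewhat noisy circuit implementations attractive.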

On-chip CMOS event cameras and time-of-flight sensors are also tackled in the project with novel approaches. The former will be addressed through synchronous sensors. The latter will rely on measurements at multiple modulation frequencies to resolve depth ambiguities and cope with multipath interference. The CMOS event camera is envisioned for moving platforms, and it will also be combined with FPGAs running DNNs for AI inference at the edge.
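The depth-ambiguity problem and its multi-frequency remedy can be sketched numerically. An indirect time-of-flight sensor measures phase, so a single modulation frequency f only gives distance modulo c/(2f); combining two frequencies extends the unambiguous range to that of their greatest common divisor. The frequencies, target distance, and brute-force unwrapping below are illustrative assumptions, not the project's actual sensor design.

```python
C = 299_792_458.0  # speed of light (m/s)

def wrapped_distance(true_d, f_mod):
    """Distance a single-frequency iToF sensor reports: wraps every c/(2f)."""
    return true_d % (C / (2 * f_mod))

def unwrap_two_freq(d1, f1, d2, f2, max_range):
    """Search over wrap counts for both frequencies and keep the pair of
    candidate distances that agree best (simple CRT-style unwrapping)."""
    a1, a2 = C / (2 * f1), C / (2 * f2)
    best = None
    for n1 in range(int(max_range // a1) + 1):
        for n2 in range(int(max_range // a2) + 1):
            c1, c2 = d1 + n1 * a1, d2 + n2 * a2
            err = abs(c1 - c2)
            if best is None or err < best[0]:
                best = (err, (c1 + c2) / 2)
    return best[1]

f1, f2 = 80e6, 60e6   # assumed modulation frequencies (Hz)
true_d = 4.2          # metres: beyond the ~1.87 m unambiguous range of f1 alone
d1 = wrapped_distance(true_d, f1)
d2 = wrapped_distance(true_d, f2)
est = unwrap_two_freq(d1, f1, d2, f2, max_range=7.5)
```

With 80 MHz and 60 MHz the combined unambiguous range grows to roughly c/(2·20 MHz) ≈ 7.5 m, since only one pair of wrap counts makes the two phase readings consistent.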

Finally, the designs tackled throughout the project are power-aware, applying low-power strategies across the design hierarchy to extend the battery life of the AI edge system.


  • Visual Object Detection and Tracking on Embedded Devices
  • Mixed-Mode On-Chip CMOS Hyperdimensional Computing
  • On-chip Event Generation and Time-of-Flight Sensors
  • Strategies for Low-Power Circuit Design