Title: Smart Schoul 2025 – The Future Luxembourg Smart School
Funding source: FNR CORE PSP Flagship
Principal investigator: Dr. Djamila Aouada
Researchers: Dr. Kassem Al Ismaeil, Dr. Enjie Ghorbel, Dr. Abd El Rahman Shabayek
Starting date / duration: 01/01/2019 – 36 months

SnT has partnered with the Ministry of National Education, represented by its department for the Coordination of Educational and Technological Research and Innovation (SCRIPT), and the Lycée Edward Steichen in Clervaux (LESC) to define the Smart Schoul 2025 project. The goal of this project is to create a fertile environment that motivates pupils to participate in designing digital tools and solutions.

Being exposed to computer science at an early age could potentially trigger the switch and inspire a pupil to become a digital creator or, at least, an ICT enthusiast. This is the general goal of public outreach activities. However, as reported in many studies, and based on the consortium’s experience within the Luxembourgish environment, two main challenges must be tackled: 1) lack of motivation among pupils, and 2) lack of incentives for researchers.

We want to create a framework and a platform for an exciting exchange between ICT researchers and school pupils. Smart Schoul 2025 will embed the computer vision and artificial intelligence components that are core to interaction in ICT and smart schools, such as gesture control, recognition of people, actions, or objects, behaviour sensing, and object tracking.

A dataset containing 400 real, high-resolution human scans of 200 subjects (100 males and 100 females in two poses each) with high-quality texture and their corresponding low-resolution meshes, with automatically computed ground-truth correspondences. See the following table.
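To illustrate the dataset's composition, the 200 subjects in two poses each account for the 400 scans, with each high-resolution scan paired to a low-resolution mesh. The following is a minimal sketch of how such a collection could be indexed; the file paths, pose labels, and subject numbering are illustrative assumptions, not the dataset's actual layout.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class ScanEntry:
    subject_id: int   # 0..199 (hypothetical numbering)
    sex: str          # "male" or "female"
    pose: str         # illustrative pose labels
    high_res_mesh: str  # assumed path to the high-resolution textured scan
    low_res_mesh: str   # assumed path to the corresponding low-resolution mesh

def build_index() -> list[ScanEntry]:
    """Enumerate 200 subjects (100 male, 100 female) in two poses each."""
    entries = []
    for subject_id, pose in product(range(200), ("pose_a", "pose_b")):
        sex = "male" if subject_id < 100 else "female"
        entries.append(ScanEntry(
            subject_id=subject_id,
            sex=sex,
            pose=pose,
            high_res_mesh=f"scans/{subject_id:03d}_{pose}_hires.obj",
            low_res_mesh=f"scans/{subject_id:03d}_{pose}_lowres.obj",
        ))
    return entries

index = build_index()
```

Each entry pairs one high-resolution scan with its low-resolution counterpart, so `len(index)` equals the 400 scans described above, split evenly between the two sexes.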
