ETH Zurich : Computer Science : Pervasive Computing : Distributed Systems : Student Projects
Sensing for Semantic User Localization (M)
The goal of this thesis is to investigate how the sensing capabilities of smart devices such as smartphones, glasses, and watches can be used to achieve a holistic understanding of the user's surrounding environment. We gather data from several sensors to infer useful characteristics about the scene. This information is then used to determine the logical, or semantic, location of the user: whether they are in a coffee shop, in a clothes store, or facing an ATM. Several user-centered applications can benefit from such a system, for example providing customized guidance and feedback depending on the user's current location.
Real-life images of the user's current location are taken with the smart device's camera, and computer vision algorithms are applied to recognize objects and other useful information in the images. The microphone is used to capture the ambient sound of the location, for example whether it is crowded or empty. Combined with other useful sensor measurements, these cues feed a probabilistic model that determines the user's current logical location with high precision.
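To make the fusion idea concrete, here is a minimal sketch of one possible probabilistic model: a naive Bayes classifier over discrete scene cues (objects detected in camera images, ambient-sound level from the microphone). All class names, features, and counts below are invented for illustration; the thesis is free to use a different model entirely.

```python
import math

# Hypothetical training counts: location -> feature -> occurrences.
# In the real system these would come from labeled sensor data.
TRAINING = {
    "coffee_shop":   {"object:cup": 8,    "object:table": 6,  "sound:crowded": 7},
    "clothes_store": {"object:rack": 9,   "object:mirror": 5, "sound:quiet": 6},
    "atm":           {"object:keypad": 9, "object:screen": 7, "sound:quiet": 8},
}

def classify(observed_features):
    """Return the location with the highest log-posterior (uniform prior,
    Laplace-smoothed feature likelihoods)."""
    best, best_score = None, -math.inf
    for location, counts in TRAINING.items():
        total = sum(counts.values())
        vocab = len(counts)
        score = 0.0  # accumulates log P(feature | location)
        for f in observed_features:
            score += math.log((counts.get(f, 0) + 1) / (total + vocab + 1))
        if score > best_score:
            best, best_score = location, score
    return best

print(classify(["object:cup", "sound:crowded"]))  # -> coffee_shop
```

A real deployment would replace the hand-written counts with likelihoods learned from the web-crawled training images and recorded audio, but the fusion step stays the same: independent per-sensor evidence combined in log space.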
Training images of different locations are collected from image search engines and store websites. The system will be evaluated in indoor and outdoor environments; candidate locations include shopping malls, shopping streets, department stores, and airports.
Student: Gábor Zogg
Requirements:
- Android development
- Knowledge of C++ programming
- Background in visual computing
Contact: Marian George