Position statement for HUC2k workshop: Infrastructure for Smart Devices
Mr. Anssi Kainulainen
Tel: +358 3 215 8874
Distributed user modelling, distributed data storage - distributed responsibility?
At the moment, the TAUCHI group has a project that combines speech user interfaces and ubiquitous computing. We are designing and implementing a system that will act in receptionist, guide and secretary roles on our premises. We will use a wide variety of contextual information and multimodal output to implement a system that operates in the following areas:
- security (whom to let in and whom not, at what time, etc.),
- guidance related to physical location (the location of rooms and/or the persons themselves),
- guidance related to staff availability (who is present, who is on the phone, in a meeting or a classroom, or otherwise busy), and
- message-relaying related to staff location and availability.
Our goal is to find ways to gather a wide enough variety of contextual information to make conclusions and predictions about users' actions and preferences reliable, despite the coarseness of the individual information sources. I see this sensor fusion of several crude sources as preferable to relying on a single reliable one, since the user is mobile and the set of sensors available in his context changes constantly. Another goal is to understand how the output of ubiquitous systems could be made more sensitive towards users' privacy. This includes both security issues and the non-intrusiveness of systems.
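To make the fusion idea concrete, here is a minimal sketch (the sensor names and readings are purely illustrative, not from our actual system) of how several coarse, individually unreliable location estimates can be combined by majority vote into a more dependable one:

```python
from collections import Counter


def fuse(readings: list[str]) -> str:
    """Combine crude location estimates by picking the majority answer."""
    return Counter(readings).most_common(1)[0][0]


# Each crude source gives its own, possibly wrong, guess of the user's location.
readings = ["corridor", "room 101", "corridor"]
print(fuse(readings))
```

Even when any single source is often wrong, the agreement among several of them can be trusted more, which is the essence of fusing crude sources instead of depending on one accurate sensor.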
There are many issues we have to address during our project:
- locating and recognizing persons with limited-capability equipment,
- contextual sensing of persons, and
- presentational problems in an ever-changing social and technological environment.
As people wander around the premises, the available input and output hardware and software environments vary all the time. If all the information gathered in this manner is processed and stored centrally, it amounts to a huge bulk of data, which requires heavy processing and is vulnerable to privacy, integrity and security problems. To avoid excess traffic and storage, much of the information should be processed locally, as near as possible to the corresponding equipment.
This data abstraction reduces the required traffic, divides the processing tasks more evenly and also lessens the information integrity risks, as only the requested, pre-processed abstract information is sent forward. This security feature is emphasized even more when using personal (such as wearable) hardware, since the user has the final word on what is gathered with his equipment and what is made available to outside systems. He can set different levels of abstraction to use for different parties. All information is available to himself and in private settings he trusts. Some information is given to his friends and to social situations that include them. Only very abstract information, such as "a person is walking here" instead of "a person with this name and these habits walks here", is given to surrounding systems in a shopping-street environment.
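The abstraction-level idea could be sketched as follows; all names, trust levels and record fields here are hypothetical illustrations, not the project's actual design. A personal device filters its context record before releasing it, depending on how much the requesting party is trusted:

```python
from dataclasses import dataclass
from enum import IntEnum


class TrustLevel(IntEnum):
    PUBLIC = 0   # e.g. surrounding systems on a shopping street
    SOCIAL = 1   # friends and social situations that include them
    PRIVATE = 2  # the user himself and private settings he trusts


@dataclass
class ContextRecord:
    name: str
    habits: str
    location: str


def abstract(record: ContextRecord, trust: TrustLevel) -> dict:
    """Release only the fields appropriate for the given trust level."""
    if trust >= TrustLevel.PRIVATE:
        # The user himself sees everything his equipment gathers.
        return {"name": record.name, "habits": record.habits,
                "location": record.location}
    if trust >= TrustLevel.SOCIAL:
        # Friends learn who and where, but not detailed habits.
        return {"name": record.name, "location": record.location}
    # Public systems only learn that "a person is walking here".
    return {"presence": "a person is walking here"}


record = ContextRecord(name="N.N.", habits="periodic breaks", location="corridor")
print(abstract(record, TrustLevel.PUBLIC))
```

The point of the sketch is that the filtering runs on the user's own hardware, so outside systems never see the unabstracted record at all.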
The biggest problem with this idea is how to define the levels of abstraction. Equipment may sense, for example, that a person uses a door at 30-minute intervals. A model of the premises may indicate that a smoking place is located near this door. Together this information implies that the person goes outside and has a cigarette every 30 minutes. If all that information is available to everyone, we clearly have an unacceptable "big brother" situation. If no information is given, the system cannot even guess where the person is at a given moment, which isn't helpful either.
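The door-and-smoking-place example can be illustrated in a few lines of code (the event times and premises facts below are made up for illustration). Neither source is sensitive on its own, yet their combination yields a conclusion about personal habits:

```python
# Source 1 alone: a door sensor fires at certain times (in minutes).
door_events = [0, 30, 60, 90]
intervals = [b - a for a, b in zip(door_events, door_events[1:])]
periodic = len(set(intervals)) == 1  # the person uses the door regularly

# Source 2 alone: a static fact from the model of the premises.
premises_model = {"front door": "smoking place nearby"}
near_smoking_place = premises_model.get("front door") == "smoking place nearby"

# Combined: a privacy-sensitive inference neither source contains by itself.
if periodic and near_smoking_place:
    inference = f"person likely smokes every {intervals[0]} minutes"
    print(inference)
```

This is why no single component can judge how harmful its own output is: the sensitivity only emerges when the pieces are joined.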
Each component is responsible for its own information, but no single component can tell how crucial and/or harmful each piece of information might be when combined with other information. How can we define the levels of abstraction so that this distributed responsibility does not amount to no responsibility at all?
User Interfaces for Ubiquitous Computing (http://www.cs.uta.fi/research/hci/ubi/)
Adaptive Speech User Interfaces (http://www.cs.uta.fi/research/hci/SUI/)