
Welcome to www.florian-alt.org

Please enjoy exploring my website, and do not hesitate to get in touch!

Awards

Publications

  • Best of CHI Honorable Mention Award, CHI’18
    M. Khamis, C. Becker, A. Bulling, and F. Alt, “Which one is me?: identifying oneself on public displays,” in Proceedings of the 2018 CHI conference on human factors in computing systems, New York, NY, USA, 2018, pp. 287:1–287:12. doi:10.1145/3173574.3173861
    While user representations are extensively used on public displays, it remains unclear how well users can recognize their own representation among those of surrounding users. We study the most widely used representations: abstract objects, skeletons, silhouettes and mirrors. In a prestudy (N=12), we identify five strategies that users follow to recognize themselves on public displays. In a second study (N=19), we quantify the users’ recognition time and accuracy with respect to each representation type. Our findings suggest that there is a significant effect of (1) the representation type, (2) the strategies performed by users, and (3) the combination of both on recognition time and accuracy. We discuss the suitability of each representation for different settings and provide specific recommendations as to how user representations should be applied in multi-user scenarios. These recommendations guide practitioners and researchers in selecting the representation that optimizes the most for the deployment’s requirements, and for the user strategies that are feasible in that environment.
    @InProceedings{khamis2018chi1,
    author = {Khamis, Mohamed and Becker, Christian and Bulling, Andreas and Alt, Florian},
    title = {Which One is Me?: Identifying Oneself on Public Displays},
    booktitle = {Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems},
    year = {2018},
    series = {CHI '18},
    pages = {287:1--287:12},
    address = {New York, NY, USA},
    publisher = {ACM},
    note = {khamis2018chi1},
    abstract = {While user representations are extensively used on public displays, it remains unclear how well users can recognize their own representation among those of surrounding users. We study the most widely used representations: abstract objects, skeletons, silhouettes and mirrors. In a prestudy (N=12), we identify five strategies that users follow to recognize themselves on public displays. In a second study (N=19), we quantify the users' recognition time and accuracy with respect to each representation type. Our findings suggest that there is a significant effect of (1) the representation type, (2) the strategies performed by users, and (3) the combination of both on recognition time and accuracy. We discuss the suitability of each representation for different settings and provide specific recommendations as to how user representations should be applied in multi-user scenarios. These recommendations guide practitioners and researchers in selecting the representation that optimizes the most for the deployment's requirements, and for the user strategies that are feasible in that environment.},
    acmid = {3173861},
    articleno = {287},
    doi = {10.1145/3173574.3173861},
    isbn = {978-1-4503-5620-6},
    keywords = {multiple users, public displays, user representations},
    location = {Montreal QC, Canada},
    numpages = {12},
    timestamp = {2018.05.01},
    url = {http://www.florian-alt.org/unibw/wp-content/publications/khamis2018chi1.pdf},
    }
  • Best Poster Award, AutoUI’18
    M. Braun, F. Roider, F. Alt, and T. Gross, “Automotive research in the public space: towards deployment-based prototypes for real users,” in Proceedings of the 10th international conference on automotive user interfaces and interactive vehicular applications, New York, NY, USA, 2018, pp. 181–185. doi:10.1145/3239092.3265964
    Many automotive user studies allow users to experience and evaluate interactive concepts. They are, however, often limited to small and specific groups of participants, such as students or experts. This might limit the generalizability of results for future users. A possible solution is to allow a large group of unbiased users to actively experience an interactive prototype and generate new ideas, but there is little experience regarding the realization and benefits of such an approach. We placed an interactive prototype in a public space and gathered objective and subjective data from 693 participants over the course of three months. We found a high variance in data quality and identified resulting restrictions for suitable research questions. From this we derive concrete requirements for hardware, software, and analytics, e.g. the need for assessing data quality, and give examples of how this approach lets users explore a system and give first-contact feedback, which differs markedly from common in-depth expert analyses.
    @InProceedings{braun2018autouiadj2,
    author = {Braun, Michael and Roider, Florian and Alt, Florian and Gross, Tom},
    title = {Automotive Research in the Public Space: Towards Deployment-Based Prototypes For Real Users},
    booktitle = {Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications},
    year = {2018},
    series = {AutomotiveUI '18},
    pages = {181--185},
    address = {New York, NY, USA},
    publisher = {ACM},
    note = {braun2018autouiadj2},
    abstract = {Many automotive user studies allow users to experience and evaluate interactive concepts. They are, however, often limited to small and specific groups of participants, such as students or experts. This might limit the generalizability of results for future users. A possible solution is to allow a large group of unbiased users to actively experience an interactive prototype and generate new ideas, but there is little experience regarding the realization and benefits of such an approach. We placed an interactive prototype in a public space and gathered objective and subjective data from 693 participants over the course of three months. We found a high variance in data quality and identified resulting restrictions for suitable research questions. From this we derive concrete requirements for hardware, software, and analytics, e.g. the need for assessing data quality, and give examples of how this approach lets users explore a system and give first-contact feedback, which differs markedly from common in-depth expert analyses.},
    acmid = {3265964},
    doi = {10.1145/3239092.3265964},
    isbn = {978-1-4503-5947-4},
    keywords = {Automotive UI, Deployment, Prototypes, User Studies},
    location = {Toronto, ON, Canada},
    numpages = {5},
    timestamp = {2018.10.05},
    url = {http://www.florian-alt.org/unibw/wp-content/publications/braun2018autouiadj2.pdf},
    }
  • Honorable Mention Award, MUM’17
    M. Khamis, L. Bandelow, S. Schick, D. Casadevall, A. Bulling, and F. Alt, “They are all after you: investigating the viability of a threat model that involves multiple shoulder surfers,” in Proceedings of the 16th international conference on mobile and ubiquitous multimedia, New York, NY, USA, 2017, pp. 31–35. doi:10.1145/3152832.3152851
    Many of the authentication schemes for mobile devices that were proposed lately complicate shoulder surfing by splitting the attacker’s attention into two or more entities. For example, multimodal authentication schemes such as GazeTouchPIN and GazeTouchPass require attackers to observe the user’s gaze input and the touch input performed on the phone’s screen. These schemes have always been evaluated against single observers, while multiple observers could potentially attack these schemes with greater ease, since each of them can focus exclusively on one part of the password. In this work, we study the effectiveness of a novel threat model against authentication schemes that split the attacker’s attention. As a case study, we report on a security evaluation of two state of the art authentication schemes in the case of a team of two observers. Our results show that although multiple observers perform better against these schemes than single observers, multimodal schemes are significantly more secure against multiple observers compared to schemes that employ a single modality. We discuss how this threat model impacts the design of authentication schemes.
    @InProceedings{khamis2017mum,
    author = {Khamis, Mohamed and Bandelow, Linda and Schick, Stina and Casadevall, Dario and Bulling, Andreas and Alt, Florian},
    title = {They Are All After You: Investigating the Viability of a Threat Model That Involves Multiple Shoulder Surfers},
    booktitle = {Proceedings of the 16th International Conference on Mobile and Ubiquitous Multimedia},
    year = {2017},
    series = {MUM '17},
    pages = {31--35},
    address = {New York, NY, USA},
    publisher = {ACM},
    note = {khamis2017mum},
    abstract = {Many of the authentication schemes for mobile devices that were proposed lately complicate shoulder surfing by splitting the attacker's attention into two or more entities. For example, multimodal authentication schemes such as GazeTouchPIN and GazeTouchPass require attackers to observe the user's gaze input and the touch input performed on the phone's screen. These schemes have always been evaluated against single observers, while multiple observers could potentially attack these schemes with greater ease, since each of them can focus exclusively on one part of the password. In this work, we study the effectiveness of a novel threat model against authentication schemes that split the attacker's attention. As a case study, we report on a security evaluation of two state of the art authentication schemes in the case of a team of two observers. Our results show that although multiple observers perform better against these schemes than single observers, multimodal schemes are significantly more secure against multiple observers compared to schemes that employ a single modality. We discuss how this threat model impacts the design of authentication schemes.},
    acmid = {3152851},
    doi = {10.1145/3152832.3152851},
    isbn = {978-1-4503-5378-6},
    keywords = {gaze gestures, multimodal authentication, multiple observers, privacy, shoulder surfing, threat model},
    location = {Stuttgart, Germany},
    numpages = {5},
    timestamp = {2017.11.26},
    url = {http://www.florian-alt.org/unibw/wp-content/publications/khamis2017mum.pdf},
    }
  • Best Paper Award, IDC’17
    R. Linke, T. Kothe, and F. Alt, “TaBooGa: a hybrid learning app to support children’s reading motivation,” in Proceedings of the 2017 conference on interaction design and children, New York, NY, USA, 2017, pp. 278–285. doi:10.1145/3078072.3079712
    In this paper we present TaBooGa (Tangible Book Game), a hybrid learning application we developed to increase children’s reading motivation. As children are exposed to digital devices early on (e.g., smartphones and tablets), weak readers are particularly apt to prefer digital offerings over reading traditional books. Prior work has shown that ebooks can partially address this challenge by making reading more compelling for children. In this work we show that augmenting ebooks with tangible elements can further increase reading motivation. In particular, we embed tangible elements that allow for navigating through the book as well as mini-games that interlace the reading task. We report on the results of an evaluation among 22 primary school pupils, comparing the influence of the approach on both strong and weak readers. Our results show a positive influence beyond reading motivation on both weak and strong readers. Yet, the approach requires striking a balance: the tangible elements should be motivating while not being too distracting.
    @InProceedings{linke2017idc,
    author = {Linke, Rebecca and Kothe, Tina and Alt, Florian},
    title = {TaBooGa: A Hybrid Learning App to Support Children's Reading Motivation},
    booktitle = {Proceedings of the 2017 Conference on Interaction Design and Children},
    year = {2017},
    series = {IDC '17},
    pages = {278--285},
    address = {New York, NY, USA},
    publisher = {ACM},
    note = {linke2017idc},
    abstract = {In this paper we present TaBooGa (Tangible Book Game), a hybrid learning application we developed to increase children's reading motivation. As children are exposed to digital devices early on (e.g., smartphones and tablets), weak readers are particularly apt to prefer digital offerings over reading traditional books. Prior work has shown that ebooks can partially address this challenge by making reading more compelling for children. In this work we show that augmenting ebooks with tangible elements can further increase reading motivation. In particular, we embed tangible elements that allow for navigating through the book as well as mini-games that interlace the reading task. We report on the results of an evaluation among 22 primary school pupils, comparing the influence of the approach on both strong and weak readers. Our results show a positive influence beyond reading motivation on both weak and strong readers. Yet, the approach requires striking a balance: the tangible elements should be motivating while not being too distracting.},
    acmid = {3079712},
    doi = {10.1145/3078072.3079712},
    isbn = {978-1-4503-4921-5},
    keywords = {book-app, hybrid, literature, motivation, reading, tangible},
    location = {Stanford, California, USA},
    numpages = {8},
    timestamp = {2017.05.02},
    url = {http://www.florian-alt.org/unibw/wp-content/publications/linke2017idc.pdf},
    }
  • Best of CHI Honorable Mention Award, CHI’17
    Y. Abdelrahman, M. Khamis, S. Schneegass, and F. Alt, “Stay cool! Understanding thermal attacks on mobile-based user authentication,” in Proceedings of the 35th annual ACM conference on human factors in computing systems, New York, NY, USA, 2017. doi:10.1145/3025453.3025461
    PINs and patterns remain among the most widely used knowledge-based authentication schemes. As thermal cameras become ubiquitous and affordable, we foresee a new form of threat to user privacy on mobile devices. Thermal cameras allow performing thermal attacks, where heat traces resulting from authentication can be used to reconstruct passwords. In this work we investigate in detail the viability of exploiting thermal imaging to infer PINs and patterns on mobile devices. We present a study (N=18) in which we evaluated how properties of PINs and patterns influence their resistance to thermal attacks. We found that thermal attacks are indeed viable on mobile devices; overlapping patterns significantly decrease the successful thermal attack rate from 100% to 16.67%, while PINs remain vulnerable (>72% success rate) even with duplicate digits. We conclude with recommendations for users and designers of authentication schemes on how to resist thermal attacks.
    @InProceedings{abdelrahman2017chi,
    author = {Abdelrahman, Yomna and Khamis, Mohamed and Schneegass, Stefan and Alt, Florian},
    title = {Stay Cool! Understanding Thermal Attacks on Mobile-based User Authentication},
    booktitle = {Proceedings of the 35th Annual ACM Conference on Human Factors in Computing Systems},
    year = {2017},
    series = {CHI '17},
    address = {New York, NY, USA},
    publisher = {ACM},
    note = {abdelrahman2017chi},
    abstract = {PINs and patterns remain among the most widely used knowledge-based authentication schemes. As thermal cameras become ubiquitous and affordable, we foresee a new form of threat to user privacy on mobile devices. Thermal cameras allow performing thermal attacks, where heat traces resulting from authentication can be used to reconstruct passwords. In this work we investigate in detail the viability of exploiting thermal imaging to infer PINs and patterns on mobile devices. We present a study (N=18) in which we evaluated how properties of PINs and patterns influence their resistance to thermal attacks. We found that thermal attacks are indeed viable on mobile devices; overlapping patterns significantly decrease the successful thermal attack rate from 100% to 16.67%, while PINs remain vulnerable (>72% success rate) even with duplicate digits. We conclude with recommendations for users and designers of authentication schemes on how to resist thermal attacks.},
    doi = {10.1145/3025453.3025461},
    location = {Denver, CO, USA},
    timestamp = {2017.05.12},
    url = {http://www.florian-alt.org/unibw/wp-content/publications/abdelrahman2017chi.pdf},
    }
  • Best of CHI Honorable Mention Award, CHI’17
    D. Buschek and F. Alt, “ProbUI: generalising touch target representations to enable declarative gesture definition for probabilistic GUIs,” in Proceedings of the 2017 CHI conference on human factors in computing systems, New York, NY, USA, 2017, pp. 4640–4653. doi:10.1145/3025453.3025502
    We present ProbUI, a mobile touch GUI framework that merges ease of use of declarative gesture definition with the benefits of probabilistic reasoning. It helps developers to handle uncertain input and implement feedback and GUI adaptations. ProbUI replaces today’s static target models (bounding boxes) with probabilistic gestures (“bounding behaviours”). It is the first touch GUI framework to unite concepts from three areas of related work: 1) Developers declaratively define touch behaviours for GUI targets. As a key insight, the declarations imply simple probabilistic models (HMMs with 2D Gaussian emissions). 2) ProbUI derives these models automatically to evaluate users’ touch sequences. 3) It then infers intended behaviour and target. Developers bind callbacks to gesture progress, completion, and other conditions. We show ProbUI’s value by implementing existing and novel widgets, and report developer feedback from a survey and a lab study.
    @InProceedings{buschek2017chi,
    author = {Buschek, Daniel and Alt, Florian},
    title = {ProbUI: Generalising Touch Target Representations to Enable Declarative Gesture Definition for Probabilistic GUIs},
    booktitle = {Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems},
    year = {2017},
    series = {CHI '17},
    pages = {4640--4653},
    address = {New York, NY, USA},
    publisher = {ACM},
    note = {buschek2017chi},
    abstract = {We present ProbUI, a mobile touch GUI framework that merges ease of use of declarative gesture definition with the benefits of probabilistic reasoning. It helps developers to handle uncertain input and implement feedback and GUI adaptations. ProbUI replaces today's static target models (bounding boxes) with probabilistic gestures ("bounding behaviours"). It is the first touch GUI framework to unite concepts from three areas of related work: 1) Developers declaratively define touch behaviours for GUI targets. As a key insight, the declarations imply simple probabilistic models (HMMs with 2D Gaussian emissions). 2) ProbUI derives these models automatically to evaluate users' touch sequences. 3) It then infers intended behaviour and target. Developers bind callbacks to gesture progress, completion, and other conditions. We show ProbUI's value by implementing existing and novel widgets, and report developer feedback from a survey and a lab study.},
    acmid = {3025502},
    doi = {10.1145/3025453.3025502},
    isbn = {978-1-4503-4655-9},
    keywords = {gui framework, probabilistic modelling, touch gestures},
    location = {Denver, Colorado, USA},
    numpages = {14},
    timestamp = {2017.05.10},
    url = {http://www.florian-alt.org/unibw/wp-content/publications/buschek2017chi.pdf},
    }
  • Honorable Mention Award, MobileHCI’15
    D. Buschek, A. De Luca, and F. Alt, “There is more to typing than speed: expressive mobile touch keyboards via dynamic font personalisation,” in Proceedings of the 17th international conference on human-computer interaction with mobile devices and services, New York, NY, USA, 2015, pp. 125–130. doi:10.1145/2785830.2785844
    Typing is a common task on mobile devices and has been widely addressed in HCI research, mostly regarding quantitative factors such as error rates and speed. Qualitative aspects, like personal expressiveness, have received less attention. This paper makes individual typing behaviour visible to the users to render mobile typing more personal and expressive in varying contexts: We introduce a dynamic font personalisation framework, TapScript, which adapts a finger-drawn font according to user behaviour and context, such as finger placement, device orientation and movements – resulting in a handwritten-looking font. We implemented TapScript for evaluation with an online survey (N=91) and a field study with a chat app (N=11). Looking at resulting fonts, survey participants distinguished pairs of typists with 84.5% accuracy and walking/sitting with 94.8%. Study participants perceived fonts as individual and the chat experience as personal. They also made creative explicit use of font adaptations.
    @InProceedings{buschek2015mobilehci,
    author = {Buschek, Daniel and De Luca, Alexander and Alt, Florian},
    title = {There is More to Typing Than Speed: Expressive Mobile Touch Keyboards via Dynamic Font Personalisation},
    booktitle = {Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services},
    year = {2015},
    series = {MobileHCI '15},
    pages = {125--130},
    address = {New York, NY, USA},
    publisher = {ACM},
    note = {buschek2015mobilehci},
    abstract = {Typing is a common task on mobile devices and has been widely addressed in HCI research, mostly regarding quantitative factors such as error rates and speed. Qualitative aspects, like personal expressiveness, have received less attention. This paper makes individual typing behaviour visible to the users to render mobile typing more personal and expressive in varying contexts: We introduce a dynamic font personalisation framework, TapScript, which adapts a finger-drawn font according to user behaviour and context, such as finger placement, device orientation and movements - resulting in a handwritten-looking font. We implemented TapScript for evaluation with an online survey (N=91) and a field study with a chat app (N=11). Looking at resulting fonts, survey participants distinguished pairs of typists with 84.5% accuracy and walking/sitting with 94.8%. Study participants perceived fonts as individual and the chat experience as personal. They also made creative explicit use of font adaptations.},
    acmid = {2785844},
    doi = {10.1145/2785830.2785844},
    isbn = {978-1-4503-3652-9},
    keywords = {Font Personalisation, Mobile, Touch Typing},
    location = {Copenhagen, Denmark},
    numpages = {6},
    timestamp = {2015.08.23},
    url = {http://www.florian-alt.org/unibw/wp-content/publications/buschek2015mobilehci.pdf},
    }

  • Best of CHI Honorable Mention Award, CHI’15
    M. Pfeiffer, T. Dünte, S. Schneegass, F. Alt, and M. Rohs, “Cruise control for pedestrians: controlling walking direction using electrical muscle stimulation,” in Proceedings of the 33rd annual ACM conference on human factors in computing systems, New York, NY, USA, 2015, pp. 2505–2514. doi:10.1145/2702123.2702190
    Pedestrian navigation systems require users to perceive, interpret, and react to navigation information. This can tax cognition as navigation information competes with information from the real world. We propose actuated navigation, a new kind of pedestrian navigation in which the user does not need to attend to the navigation task at all. An actuation signal is directly sent to the human motor system to influence walking direction. To achieve this goal we stimulate the sartorius muscle using electrical muscle stimulation. The rotation occurs during the swing phase of the leg and can easily be counteracted. The user therefore stays in control. We discuss the properties of actuated navigation and present a lab study on identifying basic parameters of the technique as well as an outdoor study in a park. The results show that our approach changes a user’s walking direction by about 16°/m on average and that the system can successfully steer users in a park with crowded areas, distractions, obstacles, and uneven ground.
    @InProceedings{pfeiffer2015chi,
    author = {Pfeiffer, Max and D\"{u}nte, Tim and Schneegass, Stefan and Alt, Florian and Rohs, Michael},
    title = {Cruise Control for Pedestrians: Controlling Walking Direction Using Electrical Muscle Stimulation},
    booktitle = {Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems},
    year = {2015},
    series = {CHI '15},
    pages = {2505--2514},
    address = {New York, NY, USA},
    publisher = {ACM},
    note = {pfeiffer2015chi},
    abstract = {Pedestrian navigation systems require users to perceive, interpret, and react to navigation information. This can tax cognition as navigation information competes with information from the real world. We propose actuated navigation, a new kind of pedestrian navigation in which the user does not need to attend to the navigation task at all. An actuation signal is directly sent to the human motor system to influence walking direction. To achieve this goal we stimulate the sartorius muscle using electrical muscle stimulation. The rotation occurs during the swing phase of the leg and can easily be counteracted. The user therefore stays in control. We discuss the properties of actuated navigation and present a lab study on identifying basic parameters of the technique as well as an outdoor study in a park. The results show that our approach changes a user's walking direction by about 16°/m on average and that the system can successfully steer users in a park with crowded areas, distractions, obstacles, and uneven ground.},
    acmid = {2702190},
    doi = {10.1145/2702123.2702190},
    isbn = {978-1-4503-3145-6},
    keywords = {actuated navigation, electrical muscle stimulation, haptic feedback, pedestrian navigation, wearable devices},
    location = {Seoul, Republic of Korea},
    numpages = {10},
    timestamp = {2015.04.28},
    url = {http://www.florian-alt.org/unibw/wp-content/publications/pfeiffer2015chi.pdf},
    }

  • Best Paper Award, CHI’12
    J. Müller, R. Walter, G. Bailly, M. Nischt, and F. Alt, “Looking Glass: A Field Study on Noticing Interactivity of a Shop Window,” in Proceedings of the 2012 ACM Conference on Human Factors in Computing Systems, New York, NY, USA, 2012, pp. 297–306. doi:10.1145/2207676.2207718
    In this paper we present our findings from a lab and a field study investigating how passers-by notice the interactivity of public displays. We designed an interactive installation that uses visual feedback to the incidental movements of passers-by to communicate its interactivity. The lab study reveals: (1) Mirrored user silhouettes and images are more effective than avatar-like representations. (2) It takes time to notice the interactivity (approx. 1.2s). In the field study, three displays were installed during three weeks in shop windows, and data about 502 interaction sessions were collected. Our observations show: (1) Significantly more passers-by interact when immediately showing the mirrored user image (+90%) or silhouette (+47%) compared to a traditional attract sequence with call-to-action. (2) Passers-by often notice interactivity late and have to walk back to interact (the landing effect). (3) If somebody is already interacting, others begin interaction behind the ones already interacting, forming multiple rows (the honeypot effect). Our findings can be used to design public display applications and shop windows that more effectively communicate interactivity to passers-by.
    @InProceedings{mueller2012chi,
    author = {M\"{u}ller, J\"{o}rg and Walter, Robert and Bailly, Gilles and Nischt, Michael and Alt, Florian},
    title = {{Looking Glass: A Field Study on Noticing Interactivity of a Shop Window}},
    booktitle = {{Proceedings of the 2012 ACM Conference on Human Factors in Computing Systems}},
    year = {2012},
    series = {CHI'12},
    pages = {297--306},
    address = {New York, NY, USA},
    month = {apr},
    publisher = {ACM},
    note = {mueller2012chi},
    abstract = {In this paper we present our findings from a lab and a field study investigating how passers-by notice the interactivity of public displays. We designed an interactive installation that uses visual feedback to the incidental movements of passers-by to communicate its interactivity. The lab study reveals: (1) Mirrored user silhouettes and images are more effective than avatar-like representations. (2) It takes time to notice the interactivity (approx. 1.2s). In the field study, three displays were installed during three weeks in shop windows, and data about 502 interaction sessions were collected. Our observations show: (1) Significantly more passers-by interact when immediately showing the mirrored user image (+90%) or silhouette (+47%) compared to a traditional attract sequence with call-to-action. (2) Passers-by often notice interactivity late and have to walk back to interact (the landing effect). (3) If somebody is already interacting, others begin interaction behind the ones already interacting, forming multiple rows (the honeypot effect). Our findings can be used to design public display applications and shop windows that more effectively communicate interactivity to passers-by.},
    acmid = {2207718},
    doi = {10.1145/2207676.2207718},
    isbn = {978-1-4503-1015-4},
    keywords = {interactivity, noticing interactivity, public displays, User representation},
    location = {Austin, Texas, USA},
    numpages = {10},
    timestamp = {2012.05.01},
    url = {http://www.florian-alt.org/unibw/wp-content/publications/mueller2012chi.pdf},
    }

  • Best Paper Award, AMI’09
    F. Alt, M. Balz, S. Kristes, A. S. Shirazi, J. Mennenöh, A. Schmidt, H. Schröder, and M. Gödicke, “Adaptive User Profiles in Pervasive Advertising Environments,” in Proceedings of the European conference on ambient intelligence, Berlin, Heidelberg, 2009, pp. 276–286. doi:10.1007/978-3-642-05408-2_32
    Nowadays, modern advertising environments try to provide more efficient ads by targeting customers based on their interests. Various approaches exist today as to how information about the users’ interests can be gathered. Users can deliberately and explicitly provide this information, or users’ shopping behaviors can be analyzed implicitly. We implemented an advertising platform to simulate an advertising environment and present adaptive profiles, which let users set up profiles based on a self-assessment, and enhance those profiles with information about their real shopping behavior as well as about their activity intensity. Additionally, we explain how pervasive technologies such as Bluetooth can be used to create a profile anonymously and unobtrusively.
    @InProceedings{alt2009ami,
    author = {Alt, Florian and Balz, Moritz and Kristes, Stefanie and Shirazi, Alireza Sahami and Mennen\"{o}h, Julian and Schmidt, Albrecht and Schr\"{o}der, Hendrik and G\"{o}dicke, Michael},
    title = {{Adaptive User Profiles in Pervasive Advertising Environments}},
    booktitle = {Proceedings of the European Conference on Ambient Intelligence},
    year = {2009},
    series = {AmI'09},
    pages = {276--286},
    address = {Berlin, Heidelberg},
    month = {nov},
    publisher = {Springer-Verlag},
    note = {alt2009ami},
    abstract = {Nowadays, modern advertising environments try to provide more efficient ads by targeting customers based on their interests. Various approaches exist today as to how information about the users’ interests can be gathered. Users can deliberately and explicitly provide this information, or users’ shopping behaviors can be analyzed implicitly. We implemented an advertising platform to simulate an advertising environment and present adaptive profiles, which let users set up profiles based on a self-assessment, and enhance those profiles with information about their real shopping behavior as well as about their activity intensity. Additionally, we explain how pervasive technologies such as Bluetooth can be used to create a profile anonymously and unobtrusively.},
    acmid = {1694666},
    doi = {10.1007/978-3-642-05408-2_32},
    isbn = {978-3-642-05407-5},
    location = {Salzburg, Austria},
    numpages = {11},
    timestamp = {2009.11.01},
    url = {http://www.florian-alt.org/unibw/wp-content/publications/alt2009ami.pdf},
    }

Reviews

  • CHI 2016 Special Recognition for exceptional reviews
  • CHI 2015 Special Recognition for exceptional reviews
  • CHI 2014 Special Recognition for exceptional reviews