Ongoing Research

The purpose of technology should be to liberate us from meaningless work, not to create tools that consume our time and attention. As a researcher specializing in innovative human-computer interfaces, I am constantly exploring the future of technology, from designing and developing new interfaces to conducting predictive research on how artificial intelligence and humans will interact. As a futurist, I am excited to push at the boundary between our physical reality and the virtual world.

Currently, I am a PhD candidate at TU Delft, investigating the intersection of Conversational AI with design and human experience. My passion lies in fostering high-quality AI-human relationships, and I am strongly motivated to explore AI's impact on society. Through my work, I aim to contribute to building more inclusive and equitable futures.

AI + Design


Synthetic Stakeholder

Embodying User Data through Conversational AI

Can we have human-centred design without real humans? With the current state of the art in conversational AI, designers can potentially chat with a Synthetic User embodying the knowledge, experience, and personality of real-world user groups. Realistic but entirely artificial data not only offer a layer of privacy between corporations and end users; Synthetic Users also don't get embarrassed, annoyed, tired, or bored. A designer can endlessly interrogate one for insights about the user group it represents, or send a hundred questions to a hundred synthetic users simultaneously. Work is underway to evaluate how co-designing with these non-human stakeholder representatives affects designers' creativity and performance.
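
As a rough illustration of the idea, the sketch below conditions a chat model on a persona prompt so that a designer can interview it like a user. It assumes an OpenAI-style chat completions client; the persona, model name, and question are hypothetical placeholders rather than the project's actual materials.

```python
# Minimal sketch of a persona-conditioned "Synthetic User" interview.
# Assumes the OpenAI Python client (>=1.0); persona, model, and question
# are illustrative, not the project's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical persona distilled from (synthetic) user research data.
PERSONA = (
    "You are 'Mara', a 67-year-old retired teacher with low vision who uses "
    "public transport daily. Answer interview questions in the first person, "
    "grounded in this background, and say so when you are unsure."
)

def ask_synthetic_user(question: str) -> str:
    """Pose one interview question to the synthetic stakeholder."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_synthetic_user("How do you plan a journey to an unfamiliar place?"))
```

Sending "a hundred questions to a hundred synthetic users" would simply loop this call over lists of personas and questions.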

Collaborators: Dr. Peter Lloyd, Dr. Senthil Chandrasegaran


Synthetic Stakeholders on Inclusive Design Decision Making

Sensitizing Designers to Blind Spots in the Lived Reality of Marginalized Groups

The lack of equity, diversity, and inclusion in society can significantly affect the decision-making of design teams, particularly when those teams are composed predominantly of individuals from privileged groups. This phenomenon, known as the privilege hazard, can result in the exclusion of marginalized groups and their perspectives, leading to incomplete and problematic design solutions. Current strategies such as participatory design and co-design aim to mitigate this problem, but they remain limited by the assumptions and biases of the design team members. This project therefore investigates how artificial personas, powered by large language models, could be integrated into the early phases of the design process to sensitize designers to the lived reality of marginalized groups. Focusing on the sensitizing and framing phases of design, it explores how AI personas can help designers identify and address their blind spots regarding the experiences, worldviews, and needs of marginalized groups, and it evaluates their potential as a tool for enhancing diversity and equity in design decision-making.
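
One way such sensitizing could work in practice is sketched below: a small set of LLM-backed personas is asked to point out what an early design brief overlooks. The personas, brief, model name, and prompt wording are illustrative assumptions, not the study materials, and an OpenAI-style chat client is again assumed.

```python
# Minimal sketch: several hypothetical personas critique an early design brief
# to surface potential blind spots. All content below is illustrative.
from openai import OpenAI

client = OpenAI()

BRIEF = "Early design brief: a mobile-only ticketing app for the city's tram network."

# Hypothetical personas representing lived realities a team may overlook.
PERSONAS = [
    "a wheelchair user who depends on step-free routes and extra boarding time",
    "an older adult who does not own a smartphone",
    "a recent migrant with limited proficiency in the local language",
]

for persona in PERSONAS:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[
            {
                "role": "system",
                "content": (
                    f"Respond in the first person as {persona}. Point out what "
                    "the following design brief overlooks about your lived reality."
                ),
            },
            {"role": "user", "content": BRIEF},
        ],
    )
    print(f"--- {persona} ---\n{response.choices[0].message.content}\n")
```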

Collaborators: Dr. Peter Lloyd, Dr. Senthil Chandrasegaran, Anne Arzberger

AI + Museum


CuratorBot

GPT-powered Chatbot for GLAM experiences

As an experimental research tool, the CuratorBot explores the potential of natural language technology in cultural heritage settings, offering a way to interact with rapidly evolving AI systems. Developed to supplement the expertise of museum and library docents, it helps visitors explore collections with deeper insight and understanding. By building on current machine learning technologies, the CuratorBot provides a grounded perspective on the opportunities and limitations of this fast-moving field and its potential impact on cultural heritage.

Collaborators: Dr. Jeff Love


CuratorBot and Adriaen Coenen’s Visboeck

GPT-powered Chatbot for GLAM experiences

The CuratorBot (CB) is a prototype conversational agent developed by H. Gu and J.S. Love at TU Delft under the aegis of the Future Libraries Lab. It is an experimental research tool for interrogating how people could interact with rapidly developing natural language technology in cultural heritage settings. The CB is conceived as a supplement to docents (tour guides) or curators for people exploring library and museum collections. It is further meant to ground the extensive debates surrounding applied machine learning technologies, their opportunities, and their limitations.
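
A minimal sketch of the grounding idea is shown below: the bot is constrained to answer from a catalogue record for the Visboeck rather than from its open-ended training data. The record fields, model name, and prompt are hypothetical illustrations, assuming an OpenAI-style chat client, and do not reflect the deployed prototype.

```python
# Minimal sketch of answering visitor questions grounded in a catalogue record.
# The record fields, model, and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

# Hypothetical catalogue record a docent might otherwise introduce.
RECORD = {
    "title": "Visboeck",
    "creator": "Adriaen Coenen",
    "date": "16th century",
    "summary": "An illustrated manuscript describing fishes, whales, and coastal life.",
}

def curator_answer(visitor_question: str) -> str:
    """Answer a visitor's question using only the catalogue record above."""
    system_prompt = (
        "You are a curator's assistant. Answer only from the record below; "
        f"if the record does not cover the question, say so.\n\nRecord: {RECORD}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": visitor_question},
        ],
    )
    return response.choices[0].message.content

print(curator_answer("Who made this book, and when?"))
```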

Collaborators: Dr. Jeff Love, Dutch Royal Library


Future of Audio Guides

Personalizing Museum Tours through AI-Enabled Conversational Guides

Based on the state of the art in Auditory Augmented Reality (AAR), this study envisions an egocentric wearable system for enhancing visitor learning and satisfaction with museum content. We hope to deploy AAR as a context-aware, knowledgeable companion that promotes a "flow state" through the museum. Unlike traditional audio guides, the proposed system will be proactive rather than passive (e.g. requiring a button press or a QR-code scan to trigger), adapting to the visitor's persona, motivation, and goals for the visit.
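
To make the contrast with passive guides concrete, the sketch below shows one hypothetical proactive trigger: narration starts when dwell time near an exhibit and a match with the visitor's interests suggest engagement. The Visitor and Exhibit fields, persona labels, and threshold values are illustrative assumptions, not the system's actual logic.

```python
# Minimal sketch of a proactive (rather than button- or QR-triggered) audio guide.
# Fields, labels, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Visitor:
    persona: str          # e.g. "explorer" or "facilitator" (illustrative labels)
    interests: set[str]   # topics the visitor indicated before the visit

@dataclass
class Exhibit:
    name: str
    topics: set[str]

def should_narrate(visitor: Visitor, exhibit: Exhibit, dwell_seconds: float) -> bool:
    """Decide whether the guide should proactively begin narrating."""
    topical_match = bool(visitor.interests & exhibit.topics)
    engaged = dwell_seconds > 8.0  # illustrative dwell-time threshold
    return engaged and (topical_match or visitor.persona == "explorer")

# Example: a lingering visitor with matching interests triggers narration.
visitor = Visitor(persona="explorer", interests={"cartography"})
exhibit = Exhibit(name="Sea chart", topics={"cartography", "trade"})
print(should_narrate(visitor, exhibit, dwell_seconds=12.0))  # True
```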

Collaborators: Dr. Jeff Love


Designing for Galleries, Libraries, Archives, and Museums (GLAM)

Developing a Creative Thinking Toolkit for Technology-enabled GLAM Experiences

Designing Future Cultural Heritage Experiences will bring together minds from different backgrounds related to cultural heritage, such as researchers and practitioners from the domains of computer science/graphics, design, psychophysics, (art) history, archaeology, and museum studies.

Collaborators: Dr. Jeff Love, Dr. Willemijn Elkhuizen, Dr. Arnold Vermeeren

AI + Death


Future of Bereavement

Delivering AI Companionship in Bereavement & Mourning

The Conversational AI to Support Bereavement & Mourning project is a research effort to develop a natural language processing system that supports people who are grieving. The goal is a system that can offer condolences, advice, and support in a way that is natural and humanlike. The project is motivated by the belief that bereavement is a universal experience and that AI can play a valuable role in supporting people through this difficult time.


 
 

Past Publications

Disability


Gu, H., & Leclercq, C. (2021, June). Compressing information density in audio-visual sensory substitution of blind individuals. In International Conference on Auditory Display (ICAD 2021).

Gu, E. H. (2018). Creative Haptic Interface Design for the Aging Population. In Assistive Technologies in Smart Cities.


Sports

Gu, H., Kunze, K., Takatani, M., & Minamizawa, K. (2015, September). Towards performance feedback through tactile displays to improve learning archery. In Adjunct Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2015 ACM International Symposium on Wearable Computers (pp. 141-144).


Reading

Sanchez, S., Dingler, T., Gu, H., & Kunze, K. (2016, May). Embodied Reading: A Multisensory Experience. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (pp. 1459-1466).


Sanchez, S., Gu, H., Kunze, K., & Inami, M. (2015, September). Multimodal literacy: storytelling across senses. In Adjunct Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2015 ACM International Symposium on Wearable Computers (pp. 1257-1260).


Gu, H., Sanchez, S., Kunze, K., & Inami, M. (2015, September). An augmented e-reader for multimodal literacy. In Adjunct Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2015 ACM International Symposium on Wearable Computers (pp. 353-356).