Michelle (Ichinco) Brachman, PhD
Summary
I am an HCI (Human-Computer Interaction) Researcher who has been focused on human-centered AI for the past five years.
I like to understand how people think, design interfaces to improve their experiences, and evaluate those experiences using qualitative and quantitative methods.
Most recently, I was part of the responsible and trustworthy AI team at IBM Research, focused on making generative and agentic AI systems more transparent, trustworthy and reliable.
Research Interests: human-centered AI, responsible AI, AI developer experience, developer/end-user programmer experience, mental models and learning with complex systems.
Projects (more details coming soon!)
Improving transparency and appropriate trust of an agentic AI
Goal: Understand the kinds of information users need to be able to correctly evaluate the accuracy of an agentic AI system's responses.
Methods: Task-based think-aloud user study with 24 participants interacting with an agentic AI chatbot and semi-structured interviews.
Findings: Users often place too much trust in an AI system and are easily misled by the sheer amount of information available (such as a list of sources). Many users do want to know about the agentic AI's capabilities, limitations, and decision-making processes.
Impact: Influenced the early design of the end-user interface for BeeAI, an open-source platform for building agentic AI systems, which won a Fast Company Innovation by Design Honorable Mention.
Discovering current and future generative AI needs of business users
Goal: Learn how knowledge workers in an enterprise context use LLMs and how they would like to use them in the future.
Methods: Survey of 216 knowledge workers and follow-up survey with 107 participants.
Findings: Users currently use LLMs for creating, finding information, getting advice, and automation. They would like to use them for more complex tasks that require domain knowledge and context.
Impact: Our findings were presented and shared broadly within IBM to inform product and innovation teams and strategy.
Enabling end-users to generate automation flows using transparency and explanation
Goal: Understand how explanations can help end-users create correct automation flows using a natural language to automation system.
Methods: Between-subjects user study on Amazon Mechanical Turk (252 participants) in which users created automation flows with one of several explanation variants or with no explanations.
Findings: Suggesting terms to add to an utterance, based on other users' inputs, helped users repair and generate correct flows more than system-focused explanations did.
Impact: Findings were integrated into the design of IBM's AppConnect AI-powered natural language to automation product.
Publications
View and access my full publication history on Google Scholar.
Selected Publications and Patents:
2025
Michelle Brachman, Siya Kunde, Sarah Miller, Ana Fucs, Samantha Dempsey, Jamie Jabbour, Werner Geyer.
Building Appropriate Mental Models: What Users Know and Want to Know about an Agentic AI Chatbot.
Proceedings of the 30th International Conference on Intelligent User Interfaces.
Michelle Brachman, Arielle Goldberg, Andrew Anderson, Stephanie Houde, Michael Muller, Justin D Weisz.
Towards Personalized and Contextualized Code Explanations.
Adjunct Proceedings of the 33rd ACM Conference on User Modeling, Adaptation and Personalization.
**Best Paper** Werner Geyer, Jessica He, Daita Sarkar, Michelle Brachman, Chris Hammond, Jennifer Heins, Zahra Ashktorab, Carlos Rosemberg, Charlie Hill.
A Case Study Investigating the Role of Generative AI in Quality Evaluations of Epics in Agile Software Development.
Proceedings of the 4th Annual Symposium on Human-Computer Interaction for Work.
2024
Michelle Brachman, Amina El-Ashry, Casey Dugan, Werner Geyer.
How Knowledge Workers Use and Want to Use LLMs in an Enterprise Context.
Extended Abstracts of the CHI Conference on Human Factors in Computing Systems.
2023
Michelle Brachman, Qian Pan, Hyo Jin Do, Casey Dugan, Arunima Chaudhary, J Johnson, Priyanshu Rai, Tathagata Chakraborti, Thomas Gschwind, Jim A Laredo, Christoph Miksovic Czasch, Paolo Scotton, Kartik Talamadupula, Gegi Thomas.
Follow the Successful Herd: Towards Explanations for Improved Use and Mental Models of Natural Language Systems.
Proceedings of the 28th International Conference on Intelligent User Interfaces.
2022
Michelle Brachman, Zahra Ashktorab, Michael Desmond, Evelyn Duesterwald, Casey Dugan, Narendra Nath Joshi, Qian Pan, Aabhas Sharma.
Reliance and Automation for Human-AI Collaborative Data Labeling Conflict Resolution.
Proceedings of the ACM on Human-Computer Interaction (CSCW).
2020
**Best Paper** Gao Gao, Finn Voichick, Michelle Ichinco, and Caitlin Kelleher. Exploring Programmers' API Learning Processes: Collecting Web Resources as External Memory. 2020 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), Dunedin, New Zealand, 2020, pp. 1-10.
2019
Michelle Ichinco and Caitlin Kelleher. Open-Ended Novice Programming Behaviors and their Implications for Supporting Learning. 2019 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), pp. 45-53.
Caitlin Kelleher and Michelle Ichinco. Towards a Model of API Learning. 2019 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), pp. 163-168.
2018
Michelle Ichinco and Caitlin Kelleher. Semi-automatic suggestion generation for young novice programmers in an open-ended context. Proceedings of the 17th ACM Conference on Interaction Design and Children, pp. 405-412.
2017
Michelle Ichinco, Wint Hnin, and Caitlin Kelleher. Suggesting API Usage to Novice Programmers with the Example Guru. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 1105-1117.
Michelle Ichinco and Caitlin Kelleher. Towards Better Code Snippets: Exploring How Code Snippet Recall Differs with Programming Experience. IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), pp. 37-41.
**Best Paper** Wint Hnin, Michelle Ichinco, and Caitlin Kelleher. An Exploratory Study of the Usage of Different Educational Resources in the Wild. IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), pp. 181-189.