The objective of Hana Kopecka’s project is to study whether users’ expectations and preferences for robot explanations vary with the robot’s appearance. As Artificial Intelligence (AI) expands into more areas of human life, the need for explainable AI becomes crucial. Explainable AI systems, whether software agents or physical robots, can explain their decision-making processes to users, ensuring transparency, supporting assessments of data privacy and fairness, fostering user trust, and helping users feel in control of the technology. Several user characteristics are known to affect explanation preferences, and a robot’s appearance is recognized to influence users’ mental models, trust, and perceived competence. However, little is understood about how robot appearance may shape users’ explanation preferences. To address this gap, Kopecka will conduct an empirical study exploring the impact of robot appearance on user explanation preferences.