Hello & welcome to my space! Allow me to introduce myself: my name is Karen Tatarian. I am passionate and curious about all things related to robotics, artificial intelligence/machine learning, human-centered interactive technologies, and product.

Currently, I am a robotics engineer and researcher at SoftBank Robotics (SBR) and a PhD candidate at Sorbonne University, part of the Institut des Systèmes Intelligents et de Robotique (ISIR) lab. I am based full-time at SBR while completing my PhD through a Marie-Curie ITN fellowship funded by the European Commission’s H2020 programme. My work focuses on human-robot interaction; more precisely, my thesis is titled “Synthesis of multi-modal social intelligence for human-robot interaction”. I hold a Bachelor of Science in Physics and a Master of Engineering in Mechanical Engineering with a focus on robotics, control, and automation.

Aiming to bridge the gap between industry and academia, I am the lead organizer of the conference workshop series “Solutions for Socially Intelligent HRI in Real-World Scenarios” (SSIR-HRI). In addition, as part of an EU-funded project with international partners, I have traveled and worked in over seven countries, from Japan to Sweden and beyond. During these travels, I collaborated with professionals in the field on various topics including, but not limited to, Virtual Characters and Computer Game Technologies, robots for Digital Learning, and Computational Social and Behavioral Science.

An advocate and spokesperson for women in the STEM and AI fields, I try my best to be the change I want to see in the world. To that end, I have partnered with the United Nations (UN) and the International Telecommunication Union (ITU) to promote the UN’s SDG 5 (gender equality) at the WSIS Forum. In addition, I have worked closely and hands-on with organizations such as Women in AI and Elles Bougent, organizing international events and classes that teach young girls to code.

Central to the success of interactive products are adaptive and intelligent interactions between humans and technology. In my recent work, I investigate, design, and develop multi-modal behaviours for social robots, combining various gestures, gaze mechanisms, and social navigation to allow the robot to influence interactions and enhance its perceived social intelligence. I am also currently working on how a robot can use human behavior during an interaction as feedback to adapt and personalize the interaction and its outcomes using Reinforcement Learning. Projects: combining robotic modalities for perceived social intelligence, estimation & adaptation of group conversational roles, and personalization and adaptation for socially intelligent interactions.
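To give a flavour of the adaptation idea, here is a minimal sketch framing it as a multi-armed bandit: the robot treats each multi-modal behaviour combination as an arm and uses an engagement-like signal from the human as the reward. The behaviour names, the simulated reward, and the epsilon-greedy strategy are all illustrative assumptions for this sketch, not the actual thesis system.

```python
import random

# Illustrative behaviour combinations (hypothetical names, not the real system).
BEHAVIOURS = ["gesture+gaze", "gaze+proxemics", "gesture+proxemics"]

class BehaviourAdapter:
    """Epsilon-greedy bandit over candidate robot behaviours."""

    def __init__(self, behaviours, epsilon=0.1):
        self.behaviours = list(behaviours)
        self.epsilon = epsilon
        self.counts = {b: 0 for b in behaviours}    # times each behaviour was tried
        self.values = {b: 0.0 for b in behaviours}  # running mean reward per behaviour

    def select(self):
        # Explore occasionally; otherwise exploit the best-known behaviour.
        if random.random() < self.epsilon:
            return random.choice(self.behaviours)
        return max(self.behaviours, key=lambda b: self.values[b])

    def update(self, behaviour, reward):
        # Incremental mean update of the estimated engagement reward.
        self.counts[behaviour] += 1
        n = self.counts[behaviour]
        self.values[behaviour] += (reward - self.values[behaviour]) / n

# Usage: a simulated human whose engagement favours one combination (assumed values).
random.seed(0)
true_engagement = {"gesture+gaze": 0.8, "gaze+proxemics": 0.5, "gesture+proxemics": 0.3}
adapter = BehaviourAdapter(BEHAVIOURS)
for _ in range(500):
    b = adapter.select()
    reward = true_engagement[b] + random.uniform(-0.1, 0.1)  # noisy human feedback
    adapter.update(b, reward)
best = max(adapter.values, key=adapter.values.get)
```

In a real interaction, the reward would come from observed human behaviour (engagement, conversational roles, proximity) rather than a simulated table, and richer RL formulations could replace the simple bandit.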

In this space, you can discover some of the projects I have worked on during my graduate and undergraduate studies and internships, as well as my publications. You can also find more information about events and media I have participated in, and you can find me on social media: Twitter, ClubHouse, LinkedIn, and GitHub.