Some people think that robots are merely villains in science fiction movies. In reality, robots play an increasingly important role in society; they repair satellites, build cars, farm crops, and much more.

While some may fear the rise of the robots, Gopika Ajaykumar, a first-year PhD student in computer science and member of the Johns Hopkins Malone Center for Engineering in Healthcare, instead sees an opportunity for robots and humans to join forces.

Ajaykumar’s research investigates the growing field of human-robot interaction. She’s working toward the development of assistive robots in the Hopkins Intuitive Computing Laboratory, led by John C. Malone Assistant Professor of Computer Science Chien-Ming Huang.

Ajaykumar chose to attend Johns Hopkins because of the university’s strength in assistive robotics and its highly collaborative environment.

“I find robotics exciting because it brings together people and ideas from so many different fields. Through the university, and the Malone Center in particular, I have the opportunity to collaborate with people from various disciplines and tackle problems from different perspectives, all while pursuing my passion for applying robotics in ways that can help people,” she said.

Her current project focuses on an approach to robot programming known as programming by demonstration (PbD). Traditionally, programming a robot requires an expert with specialized coding skills and knowledge of the intricacies of robotics. With PbD, however, even a non-expert can program a robot to complete a specific task simply by demonstrating the task themselves.
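A common PbD setup is kinesthetic teaching, in which a person physically guides the robot's arm through a task while the system records the motion for later replay. The sketch below is a minimal, hypothetical illustration of that record-and-replay loop; the interface functions (read_pose, read_gripper, move_to, set_gripper) are placeholders for whatever API a particular robot exposes, not any specific library.

```python
import time
from dataclasses import dataclass

@dataclass
class Waypoint:
    t: float             # seconds since the demonstration started
    xyz: tuple           # end-effector position in meters
    gripper_open: bool   # gripper state at this moment

def record_demonstration(read_pose, read_gripper, duration_s=5.0, hz=20):
    """Sample the robot's state while a person physically guides its arm."""
    start = time.time()
    trajectory = []
    while (elapsed := time.time() - start) < duration_s:
        trajectory.append(Waypoint(elapsed, read_pose(), read_gripper()))
        time.sleep(1.0 / hz)
    return trajectory

def replay(trajectory, move_to, set_gripper):
    """Reproduce the task by stepping through the recorded waypoints."""
    start = time.time()
    for wp in trajectory:
        # Wait until this waypoint's original timestamp comes around again.
        while time.time() - start < wp.t:
            time.sleep(0.001)
        move_to(wp.xyz)
        set_gripper(wp.gripper_open)
```

Real PbD systems typically go further, generalizing from one or more demonstrations rather than replaying them verbatim, which is where learning from additional cues, such as first-person video, comes in.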

PbD could be a game-changer for human-robot interaction. Enabling an average person to program a robot opens exciting possibilities for how we use robots in everyday life and is especially promising for healthcare delivery. For example, personal home robots could provide on-demand assistance to aging adults or people with special needs.

Ajaykumar is particularly interested in the potential of the first-person user viewpoint to enhance PbD. It’s a relatively unexplored area for robotics research, which has traditionally relied on cameras mounted on robots or in the surrounding environment rather than on human users.

“The question that I am trying to answer is: Is having access to the first-person viewpoint from a user’s head-mounted camera useful for a robot when it’s trying to learn a task from a user demonstration? I believe that first-person user sensor data can greatly aid robot learning, since this data can convey user intention. For example, head motion is a good indicator of user intention. This information could allow a robot to understand underlying concepts in a demonstration that would usually be hidden from the traditional third-person or first-person robot camera view,” said Ajaykumar.

To test her hypothesis, she will collect RGB and depth data from users wearing first-person cameras (similar to GoPros) as they complete task demonstrations. She hopes the sensor data will provide insights, cues, or trends that robots can use when learning from human demonstrations.
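To make the head-motion idea concrete, here is a minimal sketch of one way such a cue might be extracted from first-person sensor data. It is purely illustrative, not the lab's actual pipeline: it segments a stream of head orientations into "dwell" and "move" intervals, on the assumption that low head motion roughly marks moments of visual attention during a demonstration.

```python
import numpy as np

def head_motion_segments(timestamps, yaw_pitch, dwell_thresh_deg_s=5.0):
    """Split a head-orientation stream into 'dwell' and 'move' intervals.

    Low angular velocity ("dwell") is used as a rough proxy for visual
    attention: the wearer is likely fixating on a task-relevant object.
    Assumes at least two samples and ignores yaw wrap-around at +/-180
    degrees for simplicity.

    timestamps : (N,) array of seconds since the demonstration started
    yaw_pitch  : (N, 2) array of head orientation in degrees
    """
    dt = np.diff(timestamps)
    # Per-step angular speed in degrees per second.
    ang_vel = np.linalg.norm(np.diff(yaw_pitch, axis=0), axis=1) / dt
    dwelling = ang_vel < dwell_thresh_deg_s

    # Collapse consecutive samples with the same label into intervals.
    segments, start = [], 0
    for i in range(1, len(dwelling)):
        if dwelling[i] != dwelling[i - 1]:
            label = "dwell" if dwelling[i - 1] else "move"
            segments.append((timestamps[start], timestamps[i], label))
            start = i
    label = "dwell" if dwelling[-1] else "move"
    segments.append((timestamps[start], timestamps[-1], label))
    return segments
```

Dwell intervals like these could then be aligned with the RGB and depth streams to suggest to a learning algorithm which frames of a demonstration the user considered important.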

“First-person vision is special because the robot can see the world from a user’s unique perspective. Understanding user intention can help the robot figure out how it can best assist a user at any given time. As I continue with this project, I look forward to exploring how first-person vision can be used to augment robot-provided assistance,” said Ajaykumar.

Prior to joining Hopkins, Ajaykumar earned her B.S. in Electrical and Computer Engineering from The University of Texas at Austin. She received a 2018 National Science Foundation (NSF) Graduate Research Fellowship and holds the Howard and Jacqueline Chertkof Endowed Fellowship, which supports exceptional Hopkins graduate students in emerging technology fields.