Innovations in artificial neural networks are driving new opportunities in deep learning. Because it powers so many machine learning applications, from self-driving vehicles to chatbots to medical diagnostics, deep learning expertise is in high demand in today’s workforce.

Johns Hopkins students are ready to break into the field thanks to “Machine Learning: Deep Learning,” a course offered through the Department of Computer Science in the Whiting School of Engineering. Created by Mathias Unberath, assistant professor of computer science, the course is grounded in the latest deep learning concepts and techniques.

In the course, undergraduate and graduate students team up to design, implement, and validate deep learning-based solutions to contemporary problems. In addition to technical skills, students learn how to use deep learning responsibly by evaluating a model’s predictions for accuracy and bias.

Instructor Mathias Unberath introduces final projects for “Machine Learning: Deep Learning”

The pandemic posed an additional obstacle this year, requiring students to find new ways to engage and work together virtually, even across time zones. At the end of the spring semester, 27 student teams presented final projects via Zoom, more than in any previous year, and the projects covered an even broader range of applications, said Unberath.

Teams designed deep learning models to perform several medical imaging tasks, from providing surgeons with accurate information about the position of blood vessels to diagnosing diseases such as melanoma. 

Other teams trained a bot to master the game of Tic-Tac-Toe and built a question generator to assist teachers in creating testing materials. One team developed an automated system for detecting fake COVID-19 news on social media.
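The article doesn’t say how the Tic-Tac-Toe bot was trained, so the sketch below is only an illustration of one common approach: tabular Q-learning through self-play, with a simple Monte Carlo-style update that propagates the final reward back through each game. All names and parameters here are hypothetical.

```python
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if either player completed a line, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def train(episodes=2000, alpha=0.5, eps=0.1, seed=0):
    """Self-play tabular learning: Q[(state, move)] estimates X's return."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        board, player, history = [' '] * 9, 'X', []
        while True:
            moves = [i for i, s in enumerate(board) if s == ' ']
            def value(m):
                v = Q.get((tuple(board), m), 0.0)
                return v if player == 'X' else -v  # O minimizes X's value
            # epsilon-greedy: explore occasionally, otherwise play the best move
            move = rng.choice(moves) if rng.random() < eps else max(moves, key=value)
            history.append((tuple(board), move))
            board[move] = player
            w = winner(board)
            if w or ' ' not in board:
                reward = {None: 0.0, 'X': 1.0, 'O': -1.0}[w]
                for state, m in history:  # Monte Carlo-style terminal update
                    old = Q.get((state, m), 0.0)
                    Q[(state, m)] = old + alpha * (reward - old)
                break
            player = 'O' if player == 'X' else 'X'
    return Q

Q = train()  # learned value table for the self-play agent
```

Because the state space of Tic-Tac-Toe is tiny, a plain lookup table like this suffices; a deep learning course project would typically replace the table with a small neural network.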

“It’s so exciting to see these projects come together. Facilitating project work has been particularly challenging in the remote setting because there’s no natural place for students to meet and innovate. We’ve been experimenting with several formats and tools, including Piazza, Discord, and Google Forms, but we are nowhere close to emulating the stimulating environment that is Homewood Campus,” said Unberath, who has taught the popular class for four years and was recently recognized with the Professor Joel Dean Excellence in Teaching Award.

At the end of the project showcase, Intuitive Surgical, a leader in the surgical robotics market, presented two teams with a $600 Best Project Award. The 2021 Best Project winners are:

Project: “WaveNet Autoencoder with Contrastive Predictive Coding for Music Translation”

Team Members: Chester Huynh, Silu Men, David Shi, Maggie Wang

The team designed and trained a model that can translate music from one instrument to another; for example, users could feed the model a cello performance by Yo-Yo Ma and translate it to a piano piece in the style of Beethoven. The team took on the challenge of improving Facebook’s Universal Music Translation model by replacing its WaveNet autoencoder with contrastive predictive coding, an unsupervised representation learning method.

“Our goal was to build a model that can translate music in an unsupervised manner, while preserving the content and musicality of the original piece,” said Silu Men, a master’s student in data science.

Even with less computing power and training time, the team’s model performed comparably to the Facebook model.
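Contrastive predictive coding trains an encoder to distinguish the true continuation of a signal from unrelated distractors. As a rough illustration of the InfoNCE objective at its core, and not the team’s actual implementation, the loss can be sketched in NumPy:

```python
import numpy as np

def info_nce_loss(context, future, negatives):
    """InfoNCE: negative log-probability of picking the true future
    encoding over the negatives, scored by dot product with the context.

    context:   (d,)   summary of past audio frames
    future:    (d,)   encoding of the true next frame
    negatives: (k, d) encodings sampled from elsewhere
    """
    candidates = np.vstack([future, negatives])  # true sample at index 0
    scores = candidates @ context                # similarity to the context
    scores -= scores.max()                       # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum())
    return -log_probs[0]                         # NLL of the true future

# toy check: the context matches the true future, negatives are unrelated
context = np.ones(4)
loss_good = info_nce_loss(context, np.ones(4), np.zeros((3, 4)))
loss_bad = info_nce_loss(context, np.zeros(4),
                         np.vstack([np.ones(4), np.zeros((2, 4))]))
```

Minimizing this loss pushes the encoder to keep information that predicts the future of the signal, which is what makes the learned representations useful for a downstream task like music translation.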

Project: “Synthetic Training for Robust Category-Level Manipulation via Semantic Keypoints”

Team Members: Cora Dimming, Annie Mao, Liza Naydanova, Ryan Rubel

A challenging research topic in robotics is how to teach robots to manipulate objects in the real world. To achieve such capabilities, robots must be able to learn versatile manipulation skills for different objects and situations.

For their project, the team built an object detection algorithm to teach a robot how to pick up a mug and place it on a rack. The model was trained on a synthetically generated dataset, in this case images of a single 3D mug. The proposed model learns to detect keypoints on the mug (such as the handle) and estimate the pose, or position, of the object.

“Our approach allows robots to better perceive an object, which will help future robots perform more meaningful tasks besides just picking and placing objects,” said robotics graduate student Annie Mao.
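The article doesn’t detail how the pose is recovered from the detected keypoints. One standard way to do it, once corresponding 3D keypoints are available, is the Kabsch (orthogonal Procrustes) algorithm, sketched below purely as an illustration; the mug keypoints are made up.

```python
import numpy as np

def pose_from_keypoints(model_pts, detected_pts):
    """Estimate the rotation R and translation t that align model
    keypoints to detected keypoints (Kabsch algorithm), both (n, 3)."""
    mu_m = model_pts.mean(axis=0)
    mu_d = detected_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (detected_pts - mu_d)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard vs. reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_m
    return R, t

# toy "mug": four keypoints, rotated 90 degrees about z and shifted
model = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
detected = model @ Rz.T + np.array([0.5, 0.2, 0.0])
R, t = pose_from_keypoints(model, detected)  # recovers Rz and the shift
```

With the pose in hand, a motion planner can transform a canonical grasp (say, around the handle keypoint) into the scene, which is what lets a single synthetic mug model generalize to mugs in new positions.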