Chop, chop: Improving food prep with the power of AI
Researchers at CMU combined two vision foundation models—models trained on large visual data sets—to help a robot arm recognize the shape and type of fruit and vegetable slices.
Imagine that dinner could be prepared by a robot in your own home with just the push of a button. It sounds like something straight out of a far-flung future. Now, researchers at CMU are laying the groundwork for this technology with artificial intelligence.
Alison Bartch, Atharva Dikshit, and Abraham George, under the guidance of Amir Barati Farimani, assistant professor of mechanical engineering, trained a robot arm on different fruit slices and shapes to improve its accuracy in cutting and moving real pieces of food on a cutting board.
“So I’m cooking and have a cluttered cutting board. I’m reasoning about how to chop and how to select the objects that I want to cut, without cutting things that I don’t want,” said Bartch, a Ph.D. candidate in mechanical engineering. “It’s something that humans are pretty good at. And it’s a surprisingly difficult problem for robots.”
The researchers combined two vision foundation models—models trained on large visual data sets—to help the robot arm recognize the shape and type of fruit and vegetable slices. The AI then determined what motion to apply to those slices, producing the correct result 70% of the time.
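The article does not include the team's code, so the following is only a rough illustration of the idea: hypothetical classifier stubs stand in for the two vision foundation models, and a simple lookup combines their outputs (produce type and slice shape) to choose a cutting motion. All names and the motion table here are assumptions, not details from the paper.

```python
# Hypothetical sketch: combining two vision-model outputs to pick a motion.
# The classifier stubs below stand in for real foundation models, which
# would take an image and return a label.

def classify_type(image):
    """Stub for a foundation model that labels the produce type."""
    return image["type_label"]          # e.g. "cucumber"

def classify_shape(image):
    """Stub for a second model that labels the slice shape."""
    return image["shape_label"]         # e.g. "half-moon"

# Hypothetical policy: map (type, shape) pairs to a cutting motion.
MOTIONS = {
    ("cucumber", "whole"): "slice",
    ("cucumber", "half-moon"): "dice",
    ("potato", "whole"): "halve",
}

def choose_motion(image):
    key = (classify_type(image), classify_shape(image))
    # Fall back to a safe default when the pairing is unrecognized.
    return MOTIONS.get(key, "move-aside")

print(choose_motion({"type_label": "cucumber", "shape_label": "whole"}))  # slice
```

In a real system the stubs would be replaced by the fine-tuned models, and the hand-written table by a learned or planned policy; the sketch only shows how two separate perception outputs can jointly determine an action.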
One of the models was trained only on whole fruits and vegetables, so the researchers fine-tuned the AI with a relatively small amount of newly collected data to recognize slices more accurately. According to Bartch, their fine-tuned framework could be reused in related research, while the combined foundation models can support a variety of other AI projects in the future.
“That’s just one of the things in a lot of robotics and machine learning—a lot of the data sets are already collected,” said George, also a Ph.D. candidate in mechanical engineering.
Bartch said that the research focuses on the “practicalities” of cutting food, such as producing the correct number of slices and moving pieces out of the way, which makes it more applicable to a real kitchen than prior work.
“For this work, we focused more on the pipeline surrounding it, as opposed to the actual chop action,” Bartch said. “You want to pick which object to cut. You want to clear the scene before you cut. After you cut, detecting where the objects are then and continuing to plan. A lot of existing work focused on cutting assumed the position of the vegetable and that there is nothing else in the environment.”
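The pipeline Bartch describes—select a target, clear the scene, cut, re-detect, and re-plan—could be sketched as the loop below. This is a simplified, hypothetical illustration; none of these function names or steps come from the paper.

```python
# Simplified, hypothetical sketch of the perceive-plan-act pipeline
# described above; all names here are illustrative, not from the paper.

def run_pipeline(scene, target):
    """Plan a sequence of actions to cut `target` on a cluttered board."""
    actions = []
    # 1. Clear anything that blocks access to the target.
    for obj in list(scene):
        if obj != target:
            actions.append(f"move {obj} aside")
            scene.remove(obj)
    # 2. Cut the chosen object, which replaces it with slices.
    actions.append(f"cut {target}")
    scene.remove(target)
    scene.extend([f"{target}-slice-1", f"{target}-slice-2"])
    # 3. Re-detect where the resulting pieces are and keep planning.
    for obj in scene:
        actions.append(f"replan around {obj}")
    return actions

steps = run_pipeline(["carrot", "tomato"], target="carrot")
```

The point of the sketch is the structure: unlike work that assumes a lone, pre-positioned vegetable, the planner must account for clutter before the cut and for the new objects the cut creates afterward.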
However, it will be a while before AI is chopping vegetables in home kitchens. According to Bartch, a lot of research and fine-tuning is still needed, and their robot still requires some human input. It is difficult to replicate human reasoning in machines, even for tasks many humans find simple.
“I think there is definitely a really long way to go before it’s in a person’s house,” Bartch said. “Because reasoning about cooking is really difficult. When you want to go into more practical situations, you need the system to be able to reason when the food is done.”
Atharva Dikshit, a master’s student in mechanical engineering, also contributed to this research. This research was submitted to the Journal of Intelligent and Robotic Systems.