In the past few years, we have covered a few robots that can feed people with disabilities. Teaching robots how to use a fork is not always straightforward. This video from Stanford ILIAD shows a zero-shot framework that can “sense visuo-haptic properties of a previously unseen item and reactively skewer it, all within a single interaction.”
Priya Sundaresan's talk on "Learning Visuo-Haptic Skewering Strategies for Robot-Assisted Feeding"
According to the researchers, this approach yielded a 71% success rate across 6 plates of different food items.