Translating Videos to Commands for Robots with Deep Recurrent Neural Networks

Researchers are always looking for new ways to make it easier to teach robots new commands. Anh Nguyen, Dimitrios Kanoulas, Luca Muratore, Darwin G. Caldwell, and Nikos G. Tsagarakis have presented a paper that explores translating videos to commands with deep recurrent neural networks. The framework first extracts visual features from the video frames, then uses two RNN layers in an encoder-decoder architecture: the encoder summarizes the visual features and the decoder generates the command.
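To make the encoder-decoder idea concrete, here is a minimal pure-Python sketch of that pipeline. Everything in it is illustrative, not the paper's actual model: the feature, hidden, and vocabulary sizes are toy values, the command vocabulary (`TOKENS`) is hypothetical, the recurrent cells are plain Elman RNNs rather than the paper's architecture, and the weights are random and untrained. An encoder RNN folds per-frame visual features into a single hidden state, and a decoder RNN greedily emits command tokens from that state.

```python
import math
import random

FEAT, HIDDEN, VOCAB = 8, 16, 5   # toy sizes; real features would come from a CNN
TOKENS = ["<s>", "pick", "place", "push", "</s>"]  # hypothetical command vocabulary

def rand_mat(rows, cols, rng):
    return [[rng.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

def rnn_step(x, h, Wx, Wh, b):
    """One Elman RNN step: h' = tanh(Wx x + Wh h + b)."""
    return [math.tanh(sum(w * xi for w, xi in zip(Wx[j], x)) +
                      sum(w * hi for w, hi in zip(Wh[j], h)) + b[j])
            for j in range(len(b))]

def init_params(rng):
    # random, untrained weights -- a real system would learn these
    return {
        "enc_Wx": rand_mat(HIDDEN, FEAT, rng),
        "enc_Wh": rand_mat(HIDDEN, HIDDEN, rng),
        "enc_b":  [0.0] * HIDDEN,
        "dec_Wx": rand_mat(HIDDEN, HIDDEN, rng),  # decoder input = token embedding
        "dec_Wh": rand_mat(HIDDEN, HIDDEN, rng),
        "dec_b":  [0.0] * HIDDEN,
        "embed":  rand_mat(VOCAB, HIDDEN, rng),
        "W_out":  rand_mat(VOCAB, HIDDEN, rng),
    }

def encode(frame_features, p):
    """Encoder RNN: fold per-frame visual features into one hidden state."""
    h = [0.0] * HIDDEN
    for x in frame_features:
        h = rnn_step(x, h, p["enc_Wx"], p["enc_Wh"], p["enc_b"])
    return h

def decode(h, p, max_len=4):
    """Decoder RNN: greedily emit command tokens from the encoded state."""
    tok, out = 0, []  # start from the <s> token
    for _ in range(max_len):
        h = rnn_step(p["embed"][tok], h, p["dec_Wx"], p["dec_Wh"], p["dec_b"])
        logits = [sum(w * hi for w, hi in zip(row, h)) for row in p["W_out"]]
        tok = max(range(VOCAB), key=lambda i: logits[i])
        if TOKENS[tok] == "</s>":
            break
        out.append(TOKENS[tok])
    return out

rng = random.Random(0)
p = init_params(rng)
video = [[rng.uniform(-1, 1) for _ in range(FEAT)] for _ in range(10)]  # 10 fake frames
command = decode(encode(video, p), p)
print(command)  # untrained weights, so the emitted tokens are arbitrary
```

The design point the sketch illustrates is the division of labor: the encoder compresses a variable-length video into a fixed-size state, and the decoder turns that state into a variable-length command, which is what lets one network map videos of any duration to commands of any length.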

[Paper]
