Making sense of vision and touch: #ICRA2019 best paper award video and interview

PhD candidate Michelle A. Lee from the Stanford AI Lab won the best paper award at ICRA 2019 for her work “Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks”. You can read the paper on arXiv here. Audrow Nash was there to capture her pitch. And here’s the official […]


https://robohub.org/making-sense-of-visi...interview/