Objectives

The workshop will focus on a new class of sensors in which a visual sensor is used to perceive the contact state between a robot and its environment (i.e., to “see touch”). Given the high-resolution, pixel-based tactile information these sensors provide, we wish to explore how they can enable robust manipulation via both model-based and learning-based approaches, consider what kinds of physical measurements can be performed or inferred from visuotactile information, and discuss what improvements should be made to current sensors and the techniques associated with them.

We invite you to contribute and to participate in this workshop.



Workshop Schedule

https://docs.google.com/spreadsheets/d/1Jrzzdn7n9j6d9JD34LtG1EIFE06aM1eibcmkg_RnYhU

Speakers


Ted Adelson

Ted Adelson is well known for contributions to multiscale image representation (such as the Laplacian pyramid) and basic concepts in early vision such as motion energy and steerable filters (honored by the IEEE Computer Society’s Helmholtz Prize, 2013). His work on the neural mechanisms of motion perception was honored with the Rank Prize in Optoelectronics (1992). His work on layered representations for motion won the IEEE Computer Society’s Longuet-Higgins Award (2005). He introduced the plenoptic function and built the first plenoptic camera. He has done pioneering work on the problems of material perception in human and machine vision, and has produced some well-known illusions such as the Checker-Shadow Illusion. Prof. Adelson has recently developed a novel technology for artificial touch sensing, called GelSight, which converts touch to images and enables robots to have tactile sensitivity exceeding that of human skin.

Christopher G. Atkeson

I am a Professor in the Robotics Institute and Human-Computer Interaction Institute at Carnegie Mellon University. My life goal is to fulfill the science fiction vision of machines that achieve human levels of competence in perceiving, thinking, and acting. A narrower technical goal is to understand how to get machines to generate and perceive human behavior. I use two complementary approaches, exploring humanoid robotics and human-aware environments. Building humanoid robots tests our understanding of how to generate human-like behavior, and exposes the gaps and failures in current approaches.

build-baymax.org

Tapomayukh Bhattacharjee

Tapomayukh "Tapo" Bhattacharjee is an NIH Ruth L. Kirschstein NRSA postdoctoral research associate in Computer Science & Engineering at the University of Washington, working with Professor Siddhartha Srinivasa in the Personal Robotics Lab. He completed his Ph.D. in Robotics at the Georgia Institute of Technology under the supervision of Professor Charlie Kemp, received his M.S. from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea, and his B.Tech. from the National Institute of Technology, Calicut, India. He has also worked as an R&D Lab Associate at Disney Research, Los Angeles, and as a Visiting Scientist at the Interaction and Robotics Research Center, Korea Institute of Science and Technology (KIST), Seoul, South Korea. His primary research interests are in the fields of human-robot interaction, haptic perception, and robot manipulation. His research revolves around the theme of leveraging physical interactions with objects and humans in unstructured environments to enable assistive care for people with mobility limitations. His work has won the Best Technical Advances Paper Award at HRI and the Best Demonstration Award at NeurIPS.

Roberto Calandra

Roberto Calandra is a Research Scientist at Facebook AI Research. Previously, he was a Postdoctoral Scholar at the University of California, Berkeley (US) in the Berkeley Artificial Intelligence Research Laboratory (BAIR). His education includes a Ph.D. from TU Darmstadt (Germany), an M.Sc. in Machine Learning and Data Mining from Aalto University (Finland), and a B.Sc. in Computer Science from the Università degli studi di Palermo (Italy). His scientific interests focus on the conjunction of Machine Learning and Robotics, in what is known as Robot Learning.

Robert Haschke

Robert Haschke received his diploma and Ph.D. in Computer Science from the University of Bielefeld, Germany, in 1999 and 2004, respectively, and has since worked at the intersection of robotics and neural learning approaches. Robert currently heads the Robotics Group within the Neuroinformatics Group at CITEC, Bielefeld University, striving to enrich the dexterous manipulation skills of bimanual robot setups through interactive learning and multi-sensory feedback. His fields of research include neural networks, cognitive bimanual robotics, grasping and manipulation with multi-fingered dexterous hands, tactile sensing, and software integration.

Alberto Rodriguez

Alberto Rodriguez is an Associate Professor in the Mechanical Engineering Department at MIT. Alberto graduated in Mathematics ('05) and Telecommunication Engineering ('06) from the Universitat Politecnica de Catalunya, and earned his PhD (’13) from the Robotics Institute at Carnegie Mellon University. He leads the Manipulation and Mechanisms Lab at MIT (MCube), researching autonomous dexterous manipulation, robot automation, and end-effector design. Alberto has received Best Paper Awards at RSS’11, ICRA’13, RSS’18, IROS'18, and RSS'19, as well as the 2018 Best Manipulation System Paper Award from Amazon, and has been a finalist for best paper awards at IROS’16 and IROS'18. He led Team MIT-Princeton in the Amazon Robotics Challenge from 2015 to 2017, and has received Faculty Research Awards from Amazon in 2018, 2019, and 2020, and from Google in 2020. He is also the recipient of the 2020 IEEE Early Academic Career Award in Robotics and Automation.

Kazuhiro Shimonomura

Kazuhiro Shimonomura is a Professor in the Department of Robotics, Ritsumeikan University. He received the Ph.D. degree in electronic engineering from Osaka University, Osaka, Japan, in 2004. Following his time as a Postdoctoral Fellow and an Assistant Professor at Osaka University, he joined the Department of Robotics, Ritsumeikan University as an Associate Professor in 2009, and was promoted to Professor in 2018. His research interests include the development of vision sensors and vision-based systems for intelligent robots. His current research projects include an aerial robot for manipulation tasks, a high-resolution camera-based tactile sensor, embedded computer vision, and ultra-high-speed imaging.

Organizers


Alex Alspach

Alex designs and builds soft systems for sensing and manipulation at Toyota Research Institute (TRI). He earned his master's degree at Drexel University with time spent in the Drexel Autonomous Systems Lab (DASL) and KAIST's HuboLab. After graduating, Alex spent two years at SimLab in Korea developing and marketing tools for manipulation research. Prior to joining TRI, Alex developed soft huggable robots and various other systems at Disney Research.

Naveen Kuppuswamy

Naveen Kuppuswamy is a senior research scientist at the Toyota Research Institute (TRI). His current research interests are in tactile perception and control for manipulation and soft robotics.

Avinash Uttamchandani

Avinash Uttamchandani is an electrical engineer working on manipulation research at the Toyota Research Institute, focusing on tactile sensing, embedded electronics, and real-time signal processing and controls.

Filipe Veiga

Filipe Veiga is a Postdoctoral Associate at the Computer Science & Artificial Intelligence Lab at the Massachusetts Institute of Technology. His research focuses on exploring how the sense of touch can be used to improve the dexterous manipulation skills of robots.

Wenzhen Yuan

Wenzhen Yuan is an assistant professor at the Robotics Institute, Carnegie Mellon University. Her research focuses on developing high-resolution tactile sensors and applying them to robot manipulation and perception.

Related Links

Contact