January 24, 2024, 15:35 - 16:05
OS10 AROB Organized Session: Integration of AI and Robotics for Highly Versatile Robots
Prof. Tetsuya Ogata (Waseda University / AIST, Japan)
Prof. Kenichi Ohara (Meijo University, Japan)
Tetsuya Ogata, Waseda University/AIST, Japan
Deep Learning for Robotics: Enhancing Adaptive Perception and Action through Predictive Models
Traditional deep learning faces challenges in labeling and doesn't account for physical factors like friction. We explore "deep predictive learning," inspired by predictive coding, which adapts the model's state and generates motions to reduce prediction errors. This approach is crucial for self-reliant, responsive agents. Our work has already enabled a humanoid robot to fold towels, leading to industry collaborations. Our moonshot project, "AIREC," aims to create a versatile smart robot for care, blending AI and robotics. Unlike existing robots, AIREC won't require a dedicated end-effector, enhancing adaptability. We plan to integrate the deep predictive learning framework to equip AIREC with diverse capabilities, revolutionizing how robots generalize tasks. This strategy aligns with how smartphones consolidate functions, creating new value. Our goal is to empower AIREC to excel in various tasks, marking a significant step in expanding robots' capabilities.
Tetsuya Ogata received the B.S., M.S., and D.E. degrees in mechanical engineering from Waseda University, Tokyo, Japan, in 1993, 1995, and 2000, respectively. He was a Research Associate with Waseda University from 1999 to 2001. From 2001 to 2003, he was a Research Scientist with the RIKEN Brain Science Institute, Saitama, Japan. From 2003 to 2012, he was an Associate Professor at the Graduate School of Informatics, Kyoto University, Kyoto, Japan. Since 2012, he has been a Professor with the Faculty of Science and Engineering, Waseda University, and since 2017 he has been a Joint-appointed Fellow with the Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology, Tokyo. He has been a member of the board of directors of the Japan Deep Learning Association (JDLA) since 2017 and a director of the Institute of AI and Robotics, Waseda University, since 2020.
January 25, 2024, 9:00 - 9:30
OS24 SWARM Organized Session: Lunar bases construction and lunar exploration by modular and swarm AI-robots
Prof. Fumitoshi Matsuno (Osaka Institute of Technology, Japan)
Prof. Jun Morimoto (Kyoto University, Japan)
Kazuya Yoshida, Tohoku University, Japan
Challenge to Modular and Heterogeneous AI Robot System for Lunar Exploration and Outpost Construction
This talk introduces an ongoing advanced research project, conducted under the framework of the Japanese “Moonshot R&D Program,” on collaborative heterogeneous multi-robot systems for resource exploration and human outpost construction.
Our proposed project involves the development of a group of diverse robots with modular designs. This is particularly useful for space missions where it may be difficult to deliver new hardware parts and components. With modular designs, the mechanical configuration of the robots can be easily changed by rearranging the components. This allows for on-site self-update of the functionality of the existing robots.
However, this presents a challenge as the controllers need to evolve with the reconfiguration of the robot system to meet up-to-date task requirements in different environments. We will be utilizing state-of-the-art AI technologies to address this challenge.
The project will bring robust and sustainable robotics-based solutions to exploring the Moon and beyond.
Kazuya Yoshida received B.E., M.S., and Dr. Eng. degrees in Mechanical Engineering Science from the Tokyo Institute of Technology, Japan, in 1984, 1986, and 1990, respectively. He served as a Research Associate at the Tokyo Institute of Technology from 1986 to 1994 and as a Visiting Scientist at the Massachusetts Institute of Technology, U.S.A., in 1994. From 1995 to 2003 he was an Associate Professor, and since 2003 he has been a Full Professor in the Department of Aerospace Engineering, Tohoku University, Japan. He is also a co-founder, and current Technology Advisor, of ispace Inc., a start-up company for the lunar resource exploration business. In addition, he has been contributing to space robotics education for international students at the International Space University in Strasbourg, France (for the Master of Space Studies) and at various locations around the world (for Summer Study Programs), as well as in the Interdisciplinary Space Master program at the University of Luxembourg.
January 25, 2024, 13:00 - 13:30
OS1 AROB Organized Session: Adaptable AI-enabled Robots to Create a Vibrant Society
Prof. Kazushi Ikeda (Nara Institute of Science and Technology, Japan)
Prof. Tetsunari Inamura (Tamagawa University, Japan)
Yasuhisa Hirata, Tohoku University, Japan
Adaptable AI-enabled Robots to Create a Vibrant Society
This talk introduces our Moonshot project, part of the National Research and Development (R&D) program in Japan. The Moonshot program promotes high-risk, high-impact R&D aiming to achieve ambitious Moonshot Goals and to solve issues facing future society, such as super-aging populations. Our project aims to create adaptable AI-enabled robots that can be deployed in a variety of places. We are now developing a family of assistive robots called the Robotic Nimbus, which can change their shape and form according to the user's condition, the environment, and the purpose of the task, and provide appropriate assistance to encourage the user to take independent action. In particular, this talk focuses on human-assistive and human function-enhancing robots in the fields of nursing care and healthcare.
Yasuhisa Hirata is a Professor in the Department of Robotics at Tohoku University, Japan, and a Project Manager of the Moonshot R&D program in Japan. He received his B.E., M.E., and Ph.D. degrees in mechanical engineering from Tohoku University in 1998, 2000, and 2004, respectively. He previously worked as a research associate and an associate professor at Tohoku University, and was a visiting researcher at the Université de Versailles Saint-Quentin-en-Yvelines, France, in 2006 and 2012. He has served as an AdCom member and as vice president of the Technical Activities Board in the IEEE Robotics and Automation Society. For more than 20 years, he has conducted research on the coordinated control of multiple mobile robots, human-robot cooperation systems, assistive robots, haptics, and industrial robots, among other topics. He has over 200 technical publications in the area of robotics. He received Best Paper Awards from Advanced Robotics, the JSME Journal, the RSJ Journal, the Fanuc FA Foundation, ROBIO 2004, and ICMA 2020, among others.
January 25, 2024, 16:05 - 16:35
OS6 AROB Organized Session: Collaborative AI robots for adaptation of diverse environments and innovation of infrastructure construction (Moonshot program Goal-3)
Prof. Keiji Nagatani (The University of Tokyo, Japan)
Prof. Kenji Nagaoka (Kyushu Institute of Technology, Japan)
Keiji Nagatani, The University of Tokyo, Japan
Innovations in Earthworks: A 3-Year Progress Report on Collaborative AI Robots for Adapting to Diverse Environments and Innovating Infrastructure Construction
We are actively engaged in research and development on 'collaborative AI robots' capable of adapting flexibly to unexpected situations in hazardous environments, such as lunar surfaces and disaster sites. Our vision is that by 2050, these 'collaborative AI robots' will be capable of replacing humans, enabling emergency restoration after natural disasters and aiding in the construction of lunar bases. This technology will also find applications in constructing and maintaining infrastructure on Earth. To achieve this objective, we have identified three key research and development pillars:
1. Robot Hardware for Earthwork Innovation,
2. Dynamic Collaboration System for Multiple Robots, and
3. Sensor Pod System for Environmental Data Collection.
During this presentation, I will provide a 3-year progress report on our project and discuss the future of innovations in earthworks in the context of robotics.
Keiji Nagatani received his PhD from the University of Tsukuba in 1997. He was a postdoctoral fellow at Carnegie Mellon University from 1997 to 1999, a lecturer at Okayama University from 1999 to 2005, and an associate professor at Tohoku University from 2005 to 2019. He has been a professor at The University of Tokyo since 2019 and is also one of the project managers of Goal 3 in the Moonshot Research and Development program. His research interest is field robotics, including improving the traversal ability of all-terrain robots, the autonomy of inspection robots, and intelligent functions for construction machines. He is a member of the RSJ, SICE, JSME, JSASS, and IEEE.
January 26, 2024, 13:00 - 13:30
OS9 AROB Organized Session: Human-Centered Robotics
Dr. Sajid Nisar (Kyoto University of Advanced Science, Japan)
Dr. Zonghe Chua (Case Western Reserve University, USA)
Cara M. Nunez, Cornell University, USA
Human-Centered Haptic Devices for Social Communication
During social interactions, people use auditory, visual, and haptic (touch) cues to convey their thoughts, emotions, and intentions. Current technology allows humans to convey high-quality visual and auditory information but has limited ability to convey haptic expressions remotely. As people interact more through digital means rather than in person, it becomes important to be able to communicate emotions effectively through digital means as well. As online communication becomes more prevalent, systems that convey haptic signals could allow for improved distant socializing and empathetic remote human-human interaction. Due to hardware constraints and limitations in our knowledge of human haptic perception, it is difficult to create haptic devices that fully capture the complexity of human touch. Wearable haptic devices allow users to receive haptic feedback without being tethered to a set location and while performing other tasks, but they impose stricter hardware constraints on size, weight, comfort, and power consumption. In this talk, I will present how I address these challenges through a cyclic process of (1) developing novel designs, models, and control strategies for wearable haptic devices, (2) evaluating human haptic perception using these devices, and (3) using prior results and methods to further advance design methodologies and our understanding of human haptic perception.
Cara M. Nunez is an Assistant Professor in the Sibley School of Mechanical and Aerospace Engineering at Cornell University. She received a B.S. degree in Biomedical Engineering and a B.A. degree in Spanish as a part of the International Engineering Program from the University of Rhode Island, Kingston, RI, USA, in 2016 and a M.S. degree in Mechanical Engineering and a Ph.D. degree in Bioengineering from Stanford University, Stanford, CA, USA, in 2018 and 2021, respectively. She was formerly a Deutscher Akademischer Austauschdienst (DAAD) Graduate Research Fellow at the Max Planck Institute for Intelligent Systems, Stuttgart, Germany, a Robotics Research Intern at the Honda Research Institute, San Jose, CA, USA, and a Cornell Provost Faculty Fellow and Postdoctoral Research Fellow at the Harvard John A. Paulson School of Engineering and Applied Sciences, Cambridge, MA, USA. Her awards include the National Science Foundation Graduate Research Fellowship, the Stanford Centennial Teaching Assistant Award, and the Stanford Community Impact Award. She was a finalist for Best Technical Paper at the 2020 IEEE Haptics Symposium and was named a Rising Star in Mechanical Engineering in 2020. She previously served as the Student Activities Committee Chair and AdCom member and currently serves as the Student Activities Committee Senior Chair and Associate Vice President of the Media Services Board for the IEEE Robotics and Automation Society. Her recent service also includes associate editor for IEEE Robotics and Automation Letters, IEEE Haptics Symposium 2024, and IEEE Robosoft 2024. Her research interests include robotics, haptics, and human-centered design for human-machine interaction, medical applications, augmented and virtual reality, and STEM education, among others.