AROB Organized Session: Human-Centered Robotics
Organizer: Dr. Sajid Nisar (Kyoto University of Advanced Science, Japan)
Zonghe Chua, Assistant Professor, Department of Electrical, Computer, and Systems Engineering, Case Western Reserve University, USA
Telesurgical Skill Enhancement Through Visuohaptics and Error Amplification
Surgeon skill is strongly linked to improved postoperative patient outcomes. In minimally invasive telesurgery, however, the learning curve is long: trainees must adapt to diminished depth perception, limited haptic feedback, and the unique dynamics of the surgeon-side manipulanda, with few opportunities for time-intensive expert coaching. These factors motivate automated training methods that accelerate the development of key components of telesurgical skill. Robotic telesurgical platforms, such as the da Vinci Surgical System, are uniquely suited to deliver such training experiences because they can measure the operator's kinematics and provide high-fidelity visual and haptic feedback to facilitate skill learning. In this talk, I will discuss training approaches that use haptic feedback from a telesurgical robot to improve (a) movement dexterity and (b) force control, as well as new vision-based deep learning methods for providing such haptic feedback.
Zonghe Chua is an assistant professor in the Department of Electrical, Computer, and Systems Engineering at Case Western Reserve University. He directs the Enhanced Robotic Interfaces and Experiences Lab, which develops new approaches to integrating multimodal user and environmental sensing with smart algorithms, providing multisensory feedback that can enhance skill acquisition and real-time performance during teleoperation. He received his BS from the University of Illinois at Urbana-Champaign in 2015, and his MS and PhD in mechanical engineering from Stanford University in 2020 and 2022, respectively. While at Stanford, he worked in the Collaborative Haptics and Robotics in Medicine Lab and was a Stanford Bio-X Lubert Stryer Interdisciplinary Fellow, an Intuitive Surgical Student Fellow, and a National University of Singapore Young Fellow. His work on neural network-based visual force estimation and haptic feedback was a best overall paper nominee at IROS 2022.
AROB Organized Session: Recent Natural Language Processing Models and Applications
Organizer: Prof. Hidekazu Yanagimoto (Osaka Metropolitan University, Japan)
Kiyota Hashimoto, Prince of Songkla University, Thailand
Low-resource languages and recent deep learning technologies
Recent deep learning technologies have achieved superior results on many natural language processing tasks in English and other major languages. There are, however, many languages, so-called low-resource languages, for which sufficient training data, basic preprocessing tools, and foundational linguistic knowledge are not readily available. Some low-resource languages, such as Indonesian, have been addressed at least partially with sequence-to-sequence approaches, while others, including Thai and Burmese, still await further advances. In this presentation, some of the difficulties posed by low-resource languages are explained, and several ways to overcome them are introduced. On this basis, some recent achievements, particularly in Thai, Burmese, and Indonesian, are discussed.
Kiyota Hashimoto is a professor at Prince of Songkla University, Thailand. He holds a D.Eng from the Nara Institute of Science and Technology. He has worked on natural language processing research for more than 20 years, and one of his recent research interests is handling low-resource natural languages such as Thai, Indonesian, and Burmese, which lack well-designed corpora for machine learning. Until 2015, he worked at several universities in Japan, including Osaka Prefecture University. He is also currently a Collaborative Professor at Kanazawa University and serves as an associate editor of several journals.