Plenaries and Keynotes
Tuesday, October 11, 08:30 – 09:25, Grand Ballroom
  Manuela M. Veloso
  Carnegie Mellon University

We research autonomous mobile robots with a seamless integration of perception, cognition, and action. In this talk, I will first introduce our CoBot service robots and their novel localization and symbiotic autonomy, which enable them to move consistently in our buildings, now for more than 1,000 km. I will then introduce multiple human-robot interaction contributions, detailing the use of and planning for language-based complex commands, and robot learning from instruction and correction. I will conclude with robot explanation generation, in which the robots reply to language-based requests about their autonomous experience.


Manuela M. Veloso is the Herbert A. Simon University Professor in the School of Computer Science at Carnegie Mellon University. She is the Head of the Machine Learning Department, with a joint appointment in the Computer Science Department and courtesy appointments in the Robotics Institute and the Electrical and Computer Engineering Department. Her research is in Artificial Intelligence and Robotics. She founded and directs the CORAL research laboratory, for the study of autonomous agents that Collaborate, Observe, Reason, Act, and Learn. Professor Veloso is an IEEE Fellow, AAAS Fellow, AAAI Fellow, and Einstein Chair Professor, the co-founder and past President of RoboCup, and a past President of AAAI. Professor Veloso and her students research with a variety of autonomous robots, including mobile service robots and soccer robots. See her website for further information, including publications.

Wednesday, October 12, 08:30 – 09:10, Grand Ballroom  
  Tae Won Lim
  Senior Vice President, Corporate R&D Division, Hyundai-Kia Motors

Recently, there has been increased interest in advanced technologies in the automotive industry, such as autonomous driving and connectivity, and a sharp increase in investment in the related technologies. Future mobility will be based on eco-friendly, high-efficiency, and high-safety features. Furthermore, conventional powertrain systems based on combustion engines are expected to be rapidly complemented or replaced by various kinds of electrical powertrain systems. Modern vehicles are integrated with advanced technologies such as artificial intelligence, environmental perception, and V2X communication. In their implementation and usage, these technologies are very similar to those used in the robotics field. Therefore, it is no longer necessary to differentiate between automotive engineering and robotics; we are living in a world where the words 'robotics' and 'automotive engineering' are becoming synonyms. Automotive engineering is well positioned to provide insights based on the advanced technology it accumulates from mass-produced vehicles. This is a unique advantage of the automotive industry in contributing to robotics research for human beings. Therefore, it is necessary to synthesize robotics and automotive engineering in order to share their technological benefits and improve human society as a whole.


Dr. Tae Won Lim is responsible for the advanced research of Hyundai-Kia's R&D Division as the head of the Central Advanced Research and Engineering Institute.
After receiving his M.S. and Ph.D. degrees from the Department of Mechanical & Aerospace Engineering, State University of New York at Buffalo, Dr. Lim joined Hyundai Motor Co. in 1991. For 18 years he developed powertrain materials as well as environmentally friendly technologies, including fuel cells, batteries, three-way catalysts, and recycling. He led the Material Development Team for three years.
In 2000, Lim took a leading role in establishing fuel cell development activities at Hyundai and served as the leader of the Fuel Cell Development Team for six years.
Lim was promoted to Director and Vice President in 2006 and 2013, respectively, and contributed to the development of the hydrogen Fuel Cell Electric Vehicle until 2013.

After the world-first commercialization of the Tucson ix35 Fuel Cell Electric Vehicle in early 2013, Dr. Lim was transferred to the Central Advanced Research and Engineering Institute to establish advanced research activities inside Hyundai-Kia as the Head of CAREI. Since 2013, Dr. Lim has been leading all research activities in next-generation batteries, solar cells, SiC power modules for EVs and HEVs, autonomous vehicles, vehicle ergonomics, robots, and new materials and surface treatment.

Wednesday, October 12, 09:10 – 09:50, Grand Ballroom
  Gill Pratt
  Toyota Research Institute

There are on the order of 1 billion motor vehicles in service around the world, traveling on the order of 10 trillion miles each year. Presently, nearly all of those miles are driven by human beings, with on the order of 1 million fatalities worldwide per year. Despite this terribly high number of fatalities, dividing fatalities by miles yields a per-mile fatality rate for human driving on the order of 1 fatality per 10 million miles worldwide (it is on the order of 1 fatality per 100 million miles in developed countries). If fatalities caused by drunk, distracted, and drowsy driving are excluded, the reliability of human driving in developed countries is on the order of 1 fatality per billion miles.
Autonomous driving has been discussed in the media as promising improvements in safety, access, and convenience, reduced traffic, and a better environment. These are wonderful, and real, benefits that autonomous cars would bring. But how reliable must autonomous cars be before they actually improve safety? To improve upon average non-drunk, non-distracted, and non-drowsy human driving in developed countries, autonomous vehicles must cause less than 1 fatality per billion miles. There is reason to believe that to be socially accepted, autonomous cars must actually be significantly safer than this. Creating an autonomous car with this level of safety is quite difficult. Luckily, we can improve safety, access, convenience, traffic, and the environment on the way to autonomous driving in ways that are synergistic with the development of self-driving cars. This talk will describe Toyota Research Institute's approach to the problem.
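The arithmetic behind these orders of magnitude can be checked directly. The short sketch below uses the abstract's approximate worldwide figures (not exact statistics) and reproduces the quoted rates:

```python
# Rough sanity check of the orders of magnitude quoted in the abstract.
# All figures are the abstract's approximations, not exact statistics.
miles_per_year = 10e12       # ~10 trillion vehicle miles driven worldwide per year
fatalities_per_year = 1e6    # ~1 million traffic fatalities worldwide per year

worldwide_rate = fatalities_per_year / miles_per_year   # fatalities per mile
print(f"Worldwide: ~1 fatality per {1 / worldwide_rate:,.0f} miles")

# Developed countries are roughly 10x better, and excluding drunk,
# distracted, and drowsy driving is roughly another 10x better:
developed_rate = worldwide_rate / 10    # ~1 per 100 million miles
attentive_rate = developed_rate / 10    # ~1 per 1 billion miles
print(f"Bar for autonomy: fewer than 1 fatality per {1 / attentive_rate:,.0f} miles")
```

Dividing 1 million fatalities by 10 trillion miles gives 1 per 10 million miles worldwide, and two successive factor-of-ten improvements yield the 1-per-billion-miles bar that autonomous vehicles must beat.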


Dr. Gill Pratt is the Chief Executive Officer of Toyota Research Institute (TRI), a research and development enterprise designed to bridge the gap between fundamental research and product development. Launched in 2016, TRI's mission is to enhance the safety of automobiles, with the ultimate goal of creating a car that is incapable of causing a crash. It seeks to provide increased access to cars for those who otherwise cannot drive, including those with special needs and seniors. Furthermore, TRI looks to translate outdoor mobility technology into products for indoor mobility, and accelerate scientific discovery by applying techniques from artificial intelligence and machine learning. Dr. Pratt also serves as the Executive Technical Advisor to Toyota Motor Corporation. Before joining Toyota, Dr. Pratt served as a program manager in the Defense Sciences Office at the US Defense Advanced Research Projects Agency (DARPA) from January 2010 through August 2015.

Dr. Pratt's primary interest is in the field of robotics and intelligent systems. Specific areas include interfaces that significantly enhance human/machine collaboration, mechanisms and control methods for enhanced mobility and manipulation, low impedance actuators, and the application of neuroscience techniques to robot perception and control. Dr. Pratt holds a Doctor of Philosophy in electrical engineering and computer science from the Massachusetts Institute of Technology (MIT). His thesis is in the field of neurophysiology. Dr. Pratt was an Associate Professor and Director of the Leg Lab at MIT. Subsequently, he became a Professor at Franklin W. Olin College, and before joining DARPA and then Toyota, was Associate Dean of Faculty Affairs and Research. Dr. Pratt holds several patents in series elastic actuation and adaptive control.

Thursday, October 13, 08:30 – 09:25, Grand Ballroom
  Guang-Zhong Yang, PhD, FREng, FIEEE
  Director and Co-Founder, The Hamlyn Centre, Imperial College London

Empowering robots with human intelligence represents one of the ultimate goals in robotics research, and recent advances in AI and machine learning are challenging our traditional approaches to robot intelligence. Rather than relying mainly on high-level knowledge abstraction and environment modeling, learning and adaptation are now at centre stage, focusing on seamless transfer of knowledge between the human and the robot. The purpose of this talk is to outline the main concepts and practical examples of perceptual docking for harmonizing human-robot interaction. It represents a major shift in perceptual learning and knowledge acquisition for robotic systems, as operator-specific motor and perceptual/cognitive behaviours are acquired in situ through human-robot interaction. The talk will use robotic surgery as an example to illustrate different ways of gauging the underlying decision process of the operator and how human intention and skills can be learnt in real time by robots. It will demonstrate the use of gaze-contingent control, motor channelling, and mutual learning for controlling a range of platforms, from master-slave systems and continuum robots to micro-manipulators. The regulatory, ethical and legal barriers imposed on interventional surgical robots give rise to the need for tightly integrated control between the operator and the robot when autonomy is pursued. The use of perceptual docking naturally avoids some of the major technical hurdles of existing approaches, and the talk will also illustrate how this concept can benefit other applications of robotic control beyond surgery.


Professor Guang-Zhong Yang (FREng, FIEEE, FIET, FAIMBE, FIAMBE, FMICCAI, FCGI) is director and co-founder of the Hamlyn Centre for Robotic Surgery and Deputy Chairman of the Institute of Global Health Innovation, Imperial College London, UK. He is a Fellow of the Royal Academy of Engineering and a fellow of the IEEE, IET, AIMBE, IAMBE, MICCAI, and City & Guilds. He is a recipient of the Royal Society Research Merit Award and is listed in The Times Eureka ‘Top 100’ in British Science.
The Hamlyn Centre was established to develop safe, effective and accessible imaging, sensing and robotics technologies that can reshape the future of healthcare for both developing and developed countries. Focusing on technological innovation, but with a strong emphasis on clinical translation and direct patient benefit with a global impact, the centre is at the forefront of research in imaging, sensing and robotics for addressing global health challenges associated with demographic, environmental, social and economic changes. The Centre plays an active role in international collaboration and outreach activities, as well as in the training of surgeons and engineers in robotic technologies, thereby facilitating a fully integrated clinical approach.

Wednesday, October 12, 16:05 – 16:30, Grand Ballroom
  Fumio Harashima
  Tokyo Metropolitan University

Fumio Harashima spent more than three decades pushing the boundaries of power electronics and mechatronics and robotics in his lab at the University of Tokyo. At the same time, he was volunteering for IEEE and recruiting thousands of new members in Japan and throughout Region 10, the Asia Pacific region. Harashima was the first member outside the United States to become president of the IEEE Industrial Electronics Society, a post he held in 1986 and 1987. He was IEEE secretary in 1990, and through the early part of that decade he was on several IEEE boards. At the same time, membership in IEEE Region 10 increased by some 10,000 members and participation in the local Industrial Electronics Society chapter skyrocketed. Harashima also has helped establish new IEEE societies, conferences, and journals. For his dedication to IEEE, he was awarded the 2015 Haraden Pratt Award “for outstanding leadership in globalization and diversity of IEEE communities.”

Before he became an IEEE volunteer, Harashima’s innovative research laid the foundation for mechatronics, a field that merges mechanical engineering, electronics, and computer science. The field has led to invaluable inventions including industrial and medical robots as well as antilock brakes, cruise control, and other automotive technologies. In his lab at the University of Tokyo in the late 1960s, Harashima explored cutting-edge concepts in power electronics. In the early 1970s, he was one of the first researchers in the world to apply microprocessors, then a nascent technology, to motor drives, a core component of manufacturing equipment, vehicles, and robots. His work has had a tremendous impact on manufacturing and industrial automation. His groundbreaking insights that power electronics, robotics and mechatronics systems could be treated as similar mathematical systems gave researchers new ways of making intelligent machines. He advanced robotics by working on computerized motors and controllers for robotic arms; on visually controlled robotic systems that track and then grasp objects moving on a conveyor belt; and on collision-free navigation for mobile robots. He also helped establish the discipline of human-adaptive mechatronics, which involves designing intelligent machines that adapt to the skill level of their human operators. An example is a prosthetic arm that relies on sensor data and brain signals to adjust its motion. Harashima became acquainted with IEEE while eagerly reading its transactions in the University of Tokyo library as an electrical engineering undergrad. “I was deeply impressed by the world-leading research topics,” he says. “However, as a student, I could not afford the IEEE membership dues.” He joined once he became a professor at the university, and in 1974 he attended his first IEEE conference, the Annual Conference of the IEEE Industrial Electronics Society (IECON), in Philadelphia. 
At the time, a major focus at IECON was the industrial application of micro- and minicomputers, Harashima’s research topic. His experience at the conference sparked a lifelong dedication to IEEE. Harashima invited researchers from Hitachi, Mitsubishi Electric, Toshiba, and other Japanese manufacturers to join IECON activities—which helped increase conference attendance and resulted in new IEEE members. As a researcher, he traveled to universities and laboratories around the world, and he regularly invited scientists from other countries to visit his lab. “Researchers must understand different cultures, and respect them,” he says. “IEEE provided me with excellent opportunities to pursue global research. I joined nine IEEE societies so that I could build professional relationships with people from different backgrounds and experiences.” He eagerly took on leadership roles in the societies, prompted, he says, by a desire to help determine the direction of technology. His leadership as president of three different universities, as well as the IEEE Industrial Electronics Society, the Institute of Electrical Engineers of Japan, and as chair of the IEEE Tokyo Section and Japan Council has helped grow Region 10. The region now has nearly 74,000 members, roughly 44,000 more than when he started volunteering. Harashima helped establish the IEEE Power Electronics Society and the IEEE Robotics and Automation Society, and he helped launch two conferences: the IEEE International Conference on Emerging Technology and Factory Automation and the IEEE/RSJ International Conference on Intelligent Robots and Systems, which is now one of the largest robotics conferences. In 1995, he was the founding editor in chief of IEEE Transactions on Mechatronics. In honor of his contributions, the Intelligent Robots and Systems Fumio Harashima Award for Innovative Technologies was established in 2007 by the IEEE Robotics and Automation Society. 
The annual award is given to a researcher who has “pioneered activities in robotics and intelligent systems.”
Harashima was born in Tokyo in 1940. “My only hobby when I was a child was reading books,” he says. He spent several hours a day at the school library, aspiring to write novels. In high school, his parents and teachers told him his talents lay more in math and science. Realizing that being a writer would not get him far in what, after World War II, was one of the world’s poorest countries, he decided to study engineering. He earned bachelor’s, master’s, and doctoral degrees in electrical engineering from the University of Tokyo in 1962, 1964, and 1967. His doctoral research was in servomotor control, the use of error-sensing feedback signals to correct the performance of motors, such as controlling their speed and position. After he earned his Ph.D., Harashima was hired as a professor at the University of Tokyo Institute of Industrial Science. He became president of the Tokyo Metropolitan Institute of Technology in 1998. In that role, he led the nationally funded Interaction and Intelligence Project, which ran from 2000 to 2005. The project aimed to establish parameters for a society in which humans would live in close contact with intelligent machines and robots. Next came a stint as president of Tokyo Denki University, from 2004 to 2008. There he initiated the Human Adaptive Mechatronics program, which focused on technological development and the cultivation of talented researchers. He left Tokyo Denki to join Tokyo Metropolitan University as president and retired in March 2015. Mechatronics and robotics is an interdisciplinary field that requires not only technical knowledge but also an understanding of psychology and social science. To that end Harashima has encouraged young engineers to have as broad an academic background as possible, and never to stop pursuing their dreams. He himself is doing just that: He is studying archeology as an undergrad at Nara University, in Japan. 
During his acceptance speech for the Haraden Pratt Award at the 2015 IEEE Honors Ceremony in June, he joked, “It’s the first time IEEE presented one of its major awards to an undergraduate student.” “Receiving the Haraden Pratt Award just after I retired from my professional life at the age of 75 is a great honor and encouragement,” he says. “I am enjoying my new life as a student of archeology.”

Wednesday, October 12, 16:30 – 16:55, Grand Ballroom
  Gerd Hirzinger
  Former Director of DLR’s Robotics and Mechatronics Center

The talk first explains our early ideas on how sensor-controlled robots, in close cooperation with humans and in applications like assembly, should perform and react. Soon after, improving industrial robot dynamics using the first powerful PC processors turned out to be a major and surprising success. As part of DLR (the German aerospace research establishment), space robotics then strongly influenced our work at the Robotics and Mechatronics Center (RMC). Before and after ROTEX, the first remotely controlled space robot (followed by two other exciting space robot experiments), we pursued two scientific-technical areas: the development of impedance- and joint-torque-controlled ultra-lightweight robots, which (after many inspiring discussions with dynamics and control pioneer Oussama Khatib) appeared to me as the ultimate solution in future robotics, including robonauts; and the development of delay-compensating telepresence techniques with shared-autonomy concepts. Telepresent minimally invasive surgical systems with force reflection and heart-motion compensation resulted from the latter work. It has been widely confirmed that in 3D vision our Semi-Global Matching algorithm SGM has not only had a remarkable impact on modern photogrammetry, e.g. for modeling landscapes and cities in 3D using airborne cameras, but its real-time versions have also pushed forward semi-automatic car (including electric-vehicle) driving in Germany and are now entering industrial robotics. Flying robots (helicopters) equipped with manipulator arms have received particular attention in our institute's work, as have solar-electric unmanned airplanes for the stratosphere, a topic I am now pushing forward through a startup company.
Yet the biggest joy I feel presently is that, although many robot experts have predicted that safe, high-fidelity joint-torque-controlled robots would be too expensive to produce, the contrary has now been proven by our former control specialist Sami Haddadin with his new robot FRANKA and its intuitive programming techniques.


Professor Dr.-Ing. Gerd Hirzinger received his Dipl.-Ing. degree in 1969 and his doctorate in 1974 from the Technical University of Munich. In 1969 he joined DLR (the German Aerospace Center), where he became head of the automation laboratory in 1976 and started DLR's robotics activities. In 1991 he received a joint professorship from the Technical University of Munich. Since 1992 he has been director of DLR's Institute of Robotics and Mechatronics, one of the biggest and most recognized centers in the field worldwide, covering not only robot development for space and terrestrial applications, but also aircraft control and optimization (including UAVs and solar-electric stratospheric flight), vehicle technology (e.g. autonomous electromobility) and medical technology (in particular surgical robots). He was the first to send a small robot into space (in 1993, aboard the Space Shuttle Columbia) and remotely control it from the ground.

He has published more than 600 papers in robotics, mainly on robot sensing, sensory feedback, mechatronics, man-machine interfaces, telerobotics and space robotics. His technology-transfer efforts have generated several hundred high-tech jobs in industry. He has declined a number of offers of chairs at European universities and has received numerous high-ranking national and international awards, e.g. the IEEE Fellow grade and IEEE pioneer and field awards, the AIAA Space Automation and Robotics Award and, in Germany, among many others, the Leibniz Award (Germany’s highest-ranked scientific award) and the Order of Merit of the Federal Republic of Germany. He was elected a member of both German science academies (Leopoldina and acatech). He is now a technology advisor of the Bavarian State Government and a co-founder of the high-tech startups Time_in_the_Box (digital cultural heritage based on 3D world modelling) and ELEKTRA UAS (unmanned solar-electric flight).

Wednesday, October 12, 16:55 – 17:20, Grand Ballroom
  Wayne J. Book
  Georgia Tech, USA

Opportunities: they come from the environment we find ourselves in, and our life depends on how we deal with them.  It is the familiar principle of feedback control. For a given target the path will vary according to the disturbances and noise we encounter, but we determine the reaction to disturbances based on our internal feedback algorithms given the physical and psychological energy to act. Our future opportunities depend on how we deal with the opportunities we choose to pursue and how we pursue them.
Shall I feel I am the luckiest guy in the world, to have the opportunities I have? Or be jealous of others? It seems the best solution is to find the advantages given by your situation, whatever that may be.  Want to be a university professor? Then you should be raised on a small farm. REALLY? After sixty years I realize the advantages of home cooking, hard work, hands-on equipment experience, high expectations, basic education, the opportunity to dream and a little hazing to robustify your feedback control. Another’s advantages may be an urban environment with many interactions, a museum and library down the street, transportation challenges, and perhaps low expectations that you know you can beat (and some home cooking).
At some point in our lives we should realize how lucky we are, and everyone who reads this is really lucky; fortunate beyond all reason. At some point the role we play must adjust to cope with this inequity in most of the world. How will we give a hand up to those who cannot leverage their environment to move up in the world; those who don’t have the basic “energy” from food, education and expectations to convert their environmental challenges into personal progress? We may help those next door, across a border or on the other side of the world. But our luck should not blind us to this role as a responsible human being who is really lucky.
This talk will explore opportunities we have and some ways we can change the world, a world that can use some changes. Engineers can do that!


Wayne J. Book (M’74, F’95) received his BSME from the University of Texas at Austin in 1969 and his Ph.D. in mechanical engineering from M.I.T., Cambridge, MA, in 1974.
He is the HUSCO/Ramirez Distinguished Professor Emeritus and a fellow of IEEE, ASME and SME. He has served as Senior Technical Editor of the ASME Journal of Dynamic Systems, Measurement and Control and on the Management Committee of the IEEE/ASME Transactions on Mechatronics, and presently serves on the Executive Committee of the ASME Fluid Power Systems and Technology Division. He was the 2013 recipient of the ASME Robert E. Koski Medal and the 2004 ASME DSCD Leadership Award. His research interests are in robotics, system dynamics and control, fluid power, and haptics, with over 200 reviewed papers as author or coauthor and nine patents. Wayne is currently Chairman of the Board of ServeHAITI, Inc., a 501(c)(3) charity in the U.S.

ServeHAITI is a group of volunteers committed to working in solidarity with the people of Grand-Bois, Haiti to address issues in medical care, clean water, education, nutrition and economic development, guided by the core values of respect, dignity and sustainability.

Tuesday, October 11, 13:45 – 14:15, Room #111
  Tetsunari Inamura
  National Institute of Informatics

Research on high-level HRI systems that aim at skill transfer, language acquisition, dialogue management, and so on requires large-scale experience based on social and embodied interaction. However, using real robot systems entails huge costs for developing and maintaining the robots and for performing HRI sessions. If we instead choose a virtual robot simulator, embodied interaction between the virtual robots and real users is limited. I have therefore proposed an enhanced robot simulator that enables multiple users to connect to a central VR world and to join the virtual world through immersive user interfaces. As an example, we propose an application to RoboCup@Home tasks that require sharing of embodied experience between virtual robots and real users. In this talk, I explain the configuration of our simulator platform and the feasibility of the system in several applications, including RoboCup@Home.


Tetsunari Inamura received the B.E., M.S. and Ph.D. degrees from the University of Tokyo in 1995, 1997 and 2000, respectively. He was a researcher in the CREST program of the Japan Science and Technology Corporation from 2000 to 2003, and then joined the Department of Mechano-Informatics, School of Information Science and Technology, University of Tokyo as a Lecturer from 2003 to 2006. He is now an Associate Professor at the National Institute of Informatics and an Associate Professor in the Department of Informatics, SOKENDAI (The Graduate University for Advanced Studies). His research interests include imitation learning and symbol emergence on humanoid robots, and the development of interactive robots through virtual reality. He received the Funai Academic Prize in 2013, the RoboCup Award from the Japanese Society for Artificial Intelligence in 2013, and the Young Researcher Encouragement Award from the Robotics Society of Japan in 2008, among others.

He is a member of the IEEE Robotics and Automation Society and the IEEE Computational Intelligence Society, among others.

Tuesday, October 11, 13:45 – 14:15, Room #112
  Jae-Bok Song
  Korea University

A spring is an elastic object used to store mechanical energy. Springs have been widely used in various mechanical systems for centuries, but seldom employed in robots that often require positioning accuracy. In recent years, however, springs have emerged as very cheap but useful elements for robots that require either compliance or safety. Typical examples are variable stiffness actuators, series elastic actuators, safety mechanisms, counterbalance mechanisms and so on. This keynote speech will give an overview of the smart use of springs for robot mechanisms.
A variable stiffness actuator (VSA) can change its stiffness in real time by combining linear springs, some mechanical components, and actuators. Among many different types, a hybrid VSA is introduced. A safe joint mechanism (SJM) was developed to ensure safety by abruptly reducing the impact force upon a collision between a robot and a human. Because it is a purely passive device without any sensors or active components, it offers faster response, higher reliability, and much lower cost than approaches based on active compliance. A series elastic actuator (SEA), with a spring in the drivetrain, can offer compliant behavior and enable torque sensing based on the encoders. A counterbalance robot is capable of mechanical gravity compensation. This mechanism can effectively compensate for the gravitational torques required at each joint to support the robot's mass in any robot configuration, and is extendable to multi-DOF robot arms.
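The torque-sensing property of the SEA follows directly from Hooke's law: with encoders on both sides of the drivetrain spring, the measured deflection times the spring stiffness gives the transmitted torque, with no dedicated torque sensor. Below is a minimal sketch of this idea, with a stiffness value assumed purely for illustration (not taken from the talk):

```python
# Minimal SEA torque-estimation sketch (illustrative values, not from the talk).
SPRING_STIFFNESS = 300.0  # assumed torsional stiffness k of the drivetrain spring [Nm/rad]

def sea_torque(theta_motor_side: float, theta_load_side: float) -> float:
    """Estimate output torque from encoder readings on both sides of the spring.

    The spring deflection (difference of the two encoder angles, in rad)
    times the stiffness gives the transmitted torque: tau = k * delta_theta.
    """
    deflection = theta_motor_side - theta_load_side
    return SPRING_STIFFNESS * deflection

# Example: a 0.01 rad deflection corresponds to 3 Nm of transmitted torque.
print(sea_torque(0.51, 0.50))
```

The same relation underlies the VSA: there, the effective stiffness k is itself a controlled variable rather than a constant.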


Professor Jae-Bok Song received the B.S. and M.S. degrees in mechanical engineering from Seoul National University, Korea, in 1983 and 1985, respectively, and the Ph.D. degree in mechanical engineering from MIT, USA, in 1992. He joined the faculty of the Department of Mechanical Engineering, Korea University in 1993, where he has been a Full Professor since 2002. Dr. Song served as president of the Korea Robotics Society (KROS) and is serving as vice-president of the Korean Society of Mechanical Engineers. He also served as Editor-in-Chief of the International Journal of Control, Automation and Systems, and as an Editor of the International Journal of Intelligent Service Robotics. His current research interests are the design and control of various robot arms, including collaborative and counterbalance robot arms. Dr. Song has worked on developing active and passive safety mechanisms and variable stiffness actuation systems useful for the safe manipulation of service robots and industrial robots.

He has also developed the commercial KUNS (Korea University Navigation System) package for autonomous and dependable navigation for both indoor and outdoor environments. He has transferred more than 15 technologies for commercialization.

Wednesday, October 12, 13:30 – 14:00, Room #111
  I-Ming Chen
  Nanyang Technological University

In many manufacturing tasks that require high-dexterity manipulation skills and fine force contact with object surfaces, such as fine electronic part assembly and precision part polishing, the skills of human workers are critical. In ultrahigh-precision component fabrication, the skills and experience of the master play a crucial role, because the human worker carries out the task using multi-modal interaction, such as visual, tactile, and auditory feedback, to make real-time adjustments. An industrial robot with hybrid force and position control simply cannot mimic such feedback to achieve such precision. In this speech, we will examine how to model, capture, and automate such tacit manufacturing knowledge from human workers from the perspective of human-robot-environment interaction, with an industrial application. The results of this study could have a significant impact on the design of the next generation of industrial robots. First, we need to understand and quantify the skills of humans (trained personnel) during complex assembly/tooling tasks (human-human interaction). Secondly, new task-acquisition technology is needed for capturing, processing and characterizing human and tool motion, tool dynamics, and the continuous contact interaction between the tool and the workpiece (human-environment interaction). Lastly, we shall characterize and map human skill onto robot programs in order to control the robot to perform the tasks in a human-inspired manner (human-robot interaction). Along the way to this final goal, we expect to find new paradigms and a new design methodology for a new breed of cost-effective, human-inspired industrial robots, with sufficient degrees of dexterity and the underlying human-inspired actuation and control strategies, to handle delicate manufacturing tasks that have never been automated before.


Prof. I-Ming Chen received the B.S. degree from National Taiwan University in 1986, and M.S. and Ph.D. degrees from the California Institute of Technology, Pasadena, CA in 1989 and 1994 respectively. He has been with the School of Mechanical and Aerospace Engineering of Nanyang Technological University (NTU) in Singapore since 1995. He is currently Director of the Robotics Research Centre at NTU. Professor Chen also acts as the Deputy Program Manager of the A*STAR SERC Industrial Robotics Program, coordinating projects and activities under this multi-institutional program involving NTU, NUS, SIMTech, A*STAR I2R and SUTD. He is a member of the Robotics Task Force 2014 under the National Research Foundation, which is responsible for Singapore’s strategic R&D plan in future robotics. His research interests are in wearable sensors, human-robot interaction, reconfigurable automation, logistics automation and infrastructure robotics. He is currently a Senior Editor of the IEEE Transactions on Robotics.

Professor Chen is a Fellow of IEEE and a Fellow of ASME, and General Chairman of the 2017 IEEE International Conference
Wednesday, October 12, 13:30 – 14:00, Room #112
  Jungyun Choi
  Vice President, Samsung Advanced Institute of Technology, Samsung Electronics

Recent advances in artificial intelligence and robotics allow machines to take on ever more cognitive tasks, spreading concerns that the entire job market and economy will soon be affected by this technology. Breakthroughs in human-centered robots that can effectively enhance our lives, however, have received less attention, although they have a higher potential to be readily accepted. Conventional robots may be efficient and robust for routine work, but they are not suitable for cooperating with humans in unpredictable environments: rigid, heavy structures and gear-based force transmission make robots intrinsically unsafe for humans sharing the same space. The purpose of this talk is to address the importance of human-centered design of robots and of commercialization efforts that could completely change the lives of elderly people and patients. The talk will use a number of such efforts made at Samsung Electronics as examples to demonstrate how fundamental research in robotics progressed into single-port surgical robots and exoskeletons for augmenting physical strength. Throughout the research, computer simulation of the human anatomy and neuromuscular system made it possible to minimize regulatory and ethical problems when testing such devices on humans.


Jungyun Choi is the Vice President for Samsung Advanced Institute of Technology, the top-tier research establishment of Samsung Electronics. He leads the mechatronics laboratory, which focuses on R&D for human-machine interaction. Previously, he was responsible for the development of a single-port surgical robot and humanoid robots. Jungyun received his M.Sc. and Ph.D. in Mechanical Engineering from Imperial College London in 1988 and 1993, respectively. After graduation, he joined Samsung Electronics as an engineer for industrial robot and controller development. He also has extensive experience in automating manufacturing facilities for LCD/OLED panels and semiconductors at Samsung Electronics worldwide.

Thursday, October 13, 09:35 – 10:05, Room #111
  Dominik Boesl

Robotics will change the world! Over the next 50 years it will unleash a transformational power as disruptive as, if not more disruptive than, what mainstream IT technology and the Internet have unleashed over the last half-century. Nurtured by technological breakthroughs in industrial automation, robotics will permeate all domains of human life. Hence, our grandchildren will grow up in a society that is enriched and enhanced by assistive technologies in every imaginable way. Robotics and automation will be tailored into many everyday objects, becoming an integral part of all kinds of appliances. This Generation 'R' will be without fear of these technologies, perceiving their beneficial nature; they will grow up as Robotic Natives. This implies that today's people are already born to become the first society of Robotic Immigrants. Although it is not possible to precisely predict the world of tomorrow, the presented model of the 4 Robotic Revolutions provides a compelling, holistic approach to describing the future phases of robotic evolution, characterizing them according to their technological enablers and underlying interaction paradigms. Unfortunately, all technological disruptions also entail challenges and issues. But, compared with the evolution of the Internet, we face one big opportunity: the Internet more or less just "happened"; regarding robotics, automation and artificial intelligence, we can still address possibly arising issues in advance. Involving self-regulation in the sense of Technology & Robotic Governance, these challenges have to be discussed on a broad, fact-based and interdisciplinary level. Nevertheless, in order to enable sustainable technologies and responsibly drive "Technology for Humanity" (as the IEEE claims), we all have to change the way we are thinking and engage in discourse! The 2nd IEEE IROS Futurist Forum provides a platform for further exchange and discussion.


Dominik Boesl has been responsible for Innovation and Technology Management at KUKA since he first joined KUKA Laboratories as Head of Corporate Strategy and Member of the Board in 2011. In 2012, he became Corporate Innovation Manager at KUKA AG, directly reporting to the Management Board. His responsibility for innovation and evangelism efforts spans the entire KUKA group. As one of KUKA's Technology Owners (equivalent to other companies' Technical Fellows or Distinguished Engineers), he contributes substantially to the definition of the group's strategy on "Apps, Cloud & IoT". Dominik graduated with a diploma in Computer Science from the University of Augsburg and an MBA degree from the University of Pittsburgh. In addition to his career, he has continually lectured at different universities, e.g. Munich Technical University (TUM), and is an author of technical and scientific publications. At the TUM School of Education, he is researching "Technology & Robotic Governance": the ethical, moral, socio-cultural, socio-political and socio-economic implications of technologies such as robotics, automation and artificial intelligence for humankind.

In order to foster the interdisciplinary discourse about the impact of robotics and automation on society and humankind, Dominik is driving the idea of "Robotic Governance", trying to establish a framework for voluntary self-regulation regarding the use of disruptive technologies; the corresponding initiative and discussion platform can be found online. He is also engaged in the IEEE RAS FDC Incubation Project on Autonomous Systems and their Societal Impact as well as in the euRobotics Topic Group on Ethical, Social and Legal Implications (ESL) of Robotics, influencing the input to the European roadmaps. Furthermore, Dominik organizes workshops on Technology and Robotic Governance at several conferences, such as the IEEE GHTC Conference and the European Robotics Forum, and chairs the annual IEEE IROS Futurist Forum.

In 1999, Dominik started his career at Siemens, where he helped establish the foundations of today's mobile ecosystem by bringing the first UMTS broadcasting cell to market, before joining Microsoft Germany in 2005. At Microsoft, he held various leadership positions, including national responsibility in developer evangelism. Instead of moving to Seattle for a leadership position in program management at Microsoft Corporation, he decided to join the KUKA group. He is a member of the IEEE Robotics and Automation Society Industrial Activities Board, acts as the IEEE/RAS representative on the IEEE Standards Association IoT Steering Committee, and regularly serves as a judge in innovation and start-up challenges such as the IEEE/RAS & International Federation of Robotics (IFR) Invention and Entrepreneurship Award, the IEEE/RAS IROS Entrepreneurship Forum and Start-Up Contest, and the European Space Agency's (ESA) Service Robotics Masters Start-up Award. In his spare time, he publishes educational concepts on serious gaming and works as head of a charity organization that maintains AntMe!, one of the world's most successful serious games.
Thursday, October 13, 09:35 – 10:05, Room #112
  In So Kweon

For intelligent robots to operate in very complex environments, it is essential to have a robust perception system using many different sensors, such as cameras and lidar depth sensors. There have been significant advances in perception technology for intelligent robots in the last decade. We, however, often encounter many problems in applying state-of-the-art computer vision solutions to intelligent robots in real-world environments. The failures are mainly due to the lack of robustness of computer vision solutions in extremely difficult conditions, such as bad weather and cluttered backgrounds. For example, the DRC-HUBO+ often failed to detect and grasp objects, such as a drill, a door handle, and a valve, in the DRC missions with well-known computer vision solutions. In this talk, we present our experiences in developing robust computer vision techniques for intelligent robots operating in challenging conditions. Specifically, a simple camera distortion model yields a significant improvement in camera-lidar calibration, with sub-pixel re-projection accuracy. A fusion algorithm for the camera and lidar sensors successfully aligns the color and depth images with the smallest error, as of today, on the Middlebury benchmark data. The resulting fused images provide the accuracy required for the position-based DRC-HUBO+ to successfully carry out given missions in the DRC Finals, such as climbing stairs, opening a door, drilling a hole, and operating a valve. Our novel CNN, called "AttentionNet", is developed to accurately classify and localize the target objects in a given input image. The AttentionNet-based CNN architecture has been applied to the Classification and Localization task of the ILSVRC. Finally, we present the DRC-HUBO+ robot system with a video clip of the DARPA Robotics Challenge (DRC) Finals.
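As a rough illustration of what sub-pixel re-projection accuracy means in camera-lidar calibration, the sketch below scores candidate calibrations by their RMS re-projection error, using a generic pinhole model with a single radial distortion coefficient. All numbers (focal lengths, distortion value, point cloud) are invented for illustration; this is not the model used in the talk.

```python
import numpy as np

def project(points_3d, fx, fy, cx, cy, k1):
    """Pinhole projection with one radial distortion coefficient k1.
    points_3d: (N, 3) lidar points already expressed in the camera frame."""
    x = points_3d[:, 0] / points_3d[:, 2]
    y = points_3d[:, 1] / points_3d[:, 2]
    r2 = x ** 2 + y ** 2
    d = 1 + k1 * r2                       # radial distortion factor
    u = fx * x * d + cx
    v = fy * y * d + cy
    return np.stack([u, v], axis=1)

rng = np.random.default_rng(4)
pts = rng.uniform([-1, -1, 2], [1, 1, 6], size=(100, 3))  # points in front of camera
observed = project(pts, 600, 600, 320, 240, k1=-0.2)      # "ground-truth" pixels

# RMS re-projection error of two candidate calibrations: ignoring distortion
# (k1 = 0) leaves a large residual; modeling it drives the error to zero.
errs = {}
for k1 in (0.0, -0.2):
    e = np.linalg.norm(project(pts, 600, 600, 320, 240, k1) - observed, axis=1)
    errs[k1] = np.sqrt((e ** 2).mean())
    print(f"k1={k1:5.2f}: RMS re-projection error = {errs[k1]:.3f} px")
```

In a real calibration pipeline this error would be minimized over the extrinsics and distortion parameters jointly, rather than compared for two fixed candidates.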


In So Kweon received the B.S. and M.S. degrees in Mechanical Design and Production Engineering from Seoul National University, Korea, in 1981 and 1983, respectively, and the Ph.D. degree in Robotics from the Robotics Institute at Carnegie Mellon University in 1990. He worked for the Toshiba R&D Center, Japan, and joined KAIST in 1992. He is now a KEPCO Chair Professor in the School of Electrical Engineering and the director of the National Core Research Center – P3 DigiCar Center at KAIST. His research interests include computer vision and robotics. He has co-authored several books, including "Metric Invariants for Camera Calibration," and more than 500 technical papers. He served as a Founding Associate Editor-in-Chief of the "International Journal of Computer Vision and Applications", and served on the Editorial Board of the "International Journal of Computer Vision" for ten years from 2005. Professor Kweon is a member of many computer vision and robotics conference program committees and has been general and program co-chair for several conferences and workshops, including the 2012 ACCV.

Most recently, he became a program chair of the 2019 ICCV. Professor Kweon has received several awards at international conferences, including "The Best Paper Award of the IEEE Transactions on CSVT 2014", "The Best Student Paper Runner-up Award at IEEE-CVPR 2009" and "The Student Paper Award at ICCAS 2008". He is a member of KROS and IEEE.
Thursday, October 13, 10:20 – 10:50, Grand Ballroom
  Seth Hutchinson
  University of Illinois

In this talk, I will describe our recent progress in developing fault-tolerant distributed control policies for multi-robot systems. We consider two problems: rendezvous and coverage. For the former, the goal is to bring all robots to a common location, while for the latter the goal is to deploy robots to achieve optimal coverage of an environment.
We consider the case in which each robot is an autonomous decision maker that is anonymous, memoryless, and dimensionless, i.e., robots are indistinguishable from one another, make decisions based only upon current information, and do not consider collisions. Each robot has a limited sensing range and is able to directly estimate the state of only those robots within that sensing range, which induces a network topology for the multi-robot system. We assume that it is not possible for the fault-free robots to identify the faulty robots (e.g., due to the anonymity of the robots). For each problem, we provide an efficient computational framework and an analysis of algorithms, all of which converge in the face of faulty robots under a few assumptions on the network topology and sensing abilities.
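For intuition about how a limited sensing range induces the network topology, here is a minimal rendezvous rule in which each anonymous robot moves toward the centroid of the robots it can sense. This is a generic, non-fault-tolerant sketch with invented parameters; the talk's contribution is precisely the harder analysis of convergence under faulty robots, which this illustration does not attempt.

```python
import numpy as np

def rendezvous_step(pos, sensing_range, step=0.2):
    """One synchronous update: each robot moves toward the centroid of
    all robots within its sensing range (itself included)."""
    new_pos = pos.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        neighbors = pos[d <= sensing_range]      # induced network topology
        new_pos[i] = pos[i] + step * (neighbors.mean(axis=0) - pos[i])
    return new_pos

rng = np.random.default_rng(0)
pos = rng.uniform(0, 1, size=(8, 2))             # 8 anonymous robots in the plane
for _ in range(200):
    pos = rendezvous_step(pos, sensing_range=1.5)  # graph stays connected here
spread = np.linalg.norm(pos - pos.mean(axis=0), axis=1).max()
print(f"max distance from common point after 200 steps: {spread:.2e}")
```

With connectivity maintained, the update contracts all robots toward a common point; a single faulty robot that ignores the rule would already break this naive scheme.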


Seth Hutchinson received his Ph.D. from Purdue University in 1988. In 1990 he joined the faculty at the University of Illinois in Urbana-Champaign, where he is currently a Professor in the Department of Electrical and Computer Engineering, the Coordinated Science Laboratory, and the Beckman Institute for Advanced Science and Technology. He served as Associate Department Head of ECE from 2001 to 2007. He currently serves on the editorial boards of the International Journal of Robotics Research and the Journal of Intelligent Service Robotics, and chairs the steering committee of the IEEE Robotics and Automation Letters. He was Founding Editor-in-Chief of the IEEE Robotics and Automation Society's Conference Editorial Board (2006-2008), and Editor-in-Chief of the IEEE Transactions on Robotics (2008-2013). He has published more than 200 papers on the topics of robotics and computer vision, and is coauthor of the books "Principles of Robot Motion: Theory, Algorithms, and Implementations," published by MIT Press, and "Robot Modeling and Control," published by Wiley. Hutchinson is a Fellow of the IEEE.

Thursday, October 13, 13:30 – 14:00, Room #111
  Wolfram Burgard
  University of Freiburg

Probabilistic techniques for robot navigation have become key enablers in different application domains including self-driving cars, logistics, service robots and mobile manipulation. In this talk, I will present the key techniques for building successfully navigating robots, including particle filters and methods for solving the simultaneous localization and mapping (SLAM) problem. I will describe how these methods have been turned into several effective real-world applications, and I will also present probabilistic methods for combining state estimation with action selection. At the very end, I will briefly discuss how recent methods from the domain of deep networks can be used for building robustly navigating robots.
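A minimal sketch of the particle-filter idea behind probabilistic localization, reduced to one dimension with a direct noisy position sensor. All noise levels and motions are invented for illustration; real systems such as those in the talk work in higher dimensions with range sensors and maps.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, control, measurement,
                         motion_noise=0.1, meas_noise=0.2):
    """One predict-weight-resample cycle of a 1-D particle filter."""
    # Predict: apply the motion command to every particle, with noise.
    particles = particles + control + rng.normal(0, motion_noise, len(particles))
    # Weight: Gaussian likelihood of the measurement under each particle.
    w = np.exp(-0.5 * ((measurement - particles) / meas_noise) ** 2)
    w /= w.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

true_pos = 2.0
particles = rng.uniform(-5.0, 5.0, 500)        # initially: global uncertainty
for _ in range(30):                            # robot drives +0.1 per step
    true_pos += 0.1
    z = true_pos + rng.normal(0, 0.2)          # noisy position sensor
    particles = particle_filter_step(particles, 0.1, z)
print(f"estimate {particles.mean():.2f}, true position {true_pos:.2f}")
```

The particle cloud collapses from global uncertainty onto the true position as measurements arrive, which is the mechanism behind Monte Carlo localization.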


Wolfram Burgard is a professor of computer science at the University of Freiburg and head of the research lab for Autonomous Intelligent Systems. His areas of interest lie in artificial intelligence and mobile robots. Wolfram Burgard's research mainly focuses on the development of robust and adaptive techniques for state estimation and control. Over the past years he and his group have developed a series of innovative probabilistic techniques for robot navigation and perception. They cover different aspects including localization, map-building, SLAM, path-planning, exploration, perception and object recognition. Wolfram has published over 300 papers and articles in robotics and artificial intelligence conferences and journals. He is an IEEE, ECCAI and AAAI Fellow. In 2009, he received the Gottfried Wilhelm Leibniz Prize, the most prestigious German research award. In 2010, he received an Advanced Grant of the European Research Council. Since 2012, Wolfram has been coordinator of the Cluster of Excellence BrainLinks-BrainTools funded by the German Research Foundation.

Thursday, October 13, 13:30 – 14:00, Room #112
  Nikos G. Tsagarakis

The mechatronic development of humanoid robots has progressed considerably during the past two decades, with various designs based on different actuation technologies, from motorized systems to hydraulic and soft actuation technologies. Despite the advancements in the design of motorized humanoids/bipeds, however, significant barriers remain that prevent the robot hardware (physical structure and actuation) from matching human performance in locomotion and full-body motion in terms of physical robustness, power, and efficiency. Considering the compliant actuation principles and technologies developed at IIT, this talk will present their application in the development of the COMAN and WALK-MAN humanoid platforms. Details of the actuation and robot design approaches used to achieve the necessary performance in terms of physical resilience, high power and fast motion capabilities will be discussed. Recent work on energy efficiency will also be introduced, showing the potential of compliant actuation arrangements for reducing the energy consumption of articulated robotic systems.


Nikos G. Tsagarakis is Tenured Senior Scientist at IIT and Head of the Humanoid and Human Centred Mechatronics Lab, with overall responsibility for the development of humanoids (cCub, iCub, COMAN, WALK-MAN), compliant and variable impedance actuation, soft robotic arm/leg systems, and wearable assistive devices and exoskeletons for power augmentation. He is an author or co-author of over 250 papers in research journals and at international conferences and holds 14 patents. He has received several awards from international journals and conferences including ICRA, IROS, Humanoids, ICAR, Robio, ICINCO, and WorldHaptics. He has been on the Program Committee of over 60 international conferences, and he is a Technical Editor of the IEEE/ASME Transactions on Mechatronics and on the Editorial Board of the IEEE Robotics and Automation Letters. Since 2013 he has also been serving as a Visiting Professor at the Centre for Robotics Research (CORE) at King's College London.

Tuesday, October 11, 13:45 – 16:05, Grand Ballroom
  Jianwei Zhang
  University of Hamburg

In a dynamic and changing world, a robust and effective robot system must have adaptive behaviors, incrementally learnable skills and a high-level conceptual understanding of the world it inhabits, as well as planning capabilities for autonomous operations. Future intelligent control systems will benefit from the recent research on neurocognitive models in processing multisensory data, exploiting synergy, integrating high-level knowledge and learning, etc. I will first introduce crossmodal integration methods for intelligent service robots. Then I will present our investigation and experiments on a synergy technique that uses fewer parameters to govern the high DOFs of multi-finger robot movement. The third part of my talk will demonstrate how an intelligent system like a robot can evolve its model as a result of learning from experiences, and how such a model allows a robot to better understand new situations by integrating knowledge, planning and learning. I will show some integrated results of operational mobile robot platforms with grasping facilities in a restaurant service scenario.
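The synergy idea, that a few parameters can govern many joints, can be illustrated with a generic PCA sketch on synthetic joint-angle data. The data, dimensions, and use of plain PCA are all assumptions for illustration, not the speaker's method.

```python
import numpy as np

# Toy "hand posture" data: 20 joint angles actually driven by 3 latent synergies.
rng = np.random.default_rng(2)
n_joints, n_synergies, n_postures = 20, 3, 200
synergies_true = rng.normal(size=(n_synergies, n_joints))
activations = rng.normal(size=(n_postures, n_synergies))
postures = activations @ synergies_true          # (200, 20) joint-angle matrix

# PCA via SVD: the leading principal components recover the synergy subspace.
mean = postures.mean(axis=0)
centered = postures - mean
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 3
coords = centered @ Vt[:k].T                     # only 3 numbers per posture
reconstructed = coords @ Vt[:k] + mean
err = np.abs(reconstructed - postures).max()
print(f"max reconstruction error with {k} of {n_joints} parameters: {err:.1e}")
```

Because the synthetic postures really do live in a 3-dimensional subspace, three coordinates reconstruct all 20 joint angles essentially exactly; on real hand data the reconstruction is approximate but a handful of synergies typically captures most of the variance.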


Jianwei Zhang is professor and head of TAMS, Department of Informatics, University of Hamburg, Germany. He received both his Bachelor of Engineering (1986, with distinction) and Master of Engineering (1989) at the Department of Computer Science of Tsinghua University, Beijing, China, his PhD (1994) at the Institute of Real-Time Computer Systems and Robotics, Department of Computer Science, University of Karlsruhe, Germany, and Habilitation (2000) at the Faculty of Technology, University of Bielefeld, Germany. His research interests are sensor fusion, service robotics, multimodal machine learning, cognitive computing for Industry 4.0, etc. In these areas he has published about 300 journal and conference papers and technical reports, four book chapters and three research monographs. He holds 37 patents on service robot components and systems. He is the coordinator of the DFG/NSFC Transregional Collaborative Research Centre SFB/TRR169 "Crossmodal Learning" and of several EU robotics projects.

He has received several awards, including the IEEE ROMAN Best Paper Award in 2002, the IEEE AIM Best Paper Award 2008, the IEEE ROBIO Best Conference Paper Award 2013 and the ROBIO Best Paper on Biomimetics 2014. He was General Chair of IEEE MFI 2012 and IEEE/RSJ IROS 2015, and a member of the IEEE Robotics and Automation Society AdCom (2013-2015). Jianwei Zhang is a life-long Academician of the Academy of Sciences in Hamburg.
  Sergey Levine
  UC Berkeley

The problem of building an autonomous robot has traditionally been viewed as one of integration: connecting together modular components, each one designed to handle some portion of the perception and decision making process. For example, a vision system might be connected to a planner that might in turn provide commands to a low-level controller that drives the robot's motors. In this talk, I will discuss how ideas from deep learning can allow us to build robotic control mechanisms that combine both perception and control into a single system. This system can then be trained end-to-end on the task at hand. I will show how this end-to-end approach actually simplifies the perception and control problems, by allowing the perception and control mechanisms to adapt to one another and to the task. I will also present some recent work on scaling up deep robotic learning on a cluster consisting of multiple robotic arms, and demonstrate results for learning grasping strategies that involve continuous feedback and hand-eye coordination using deep convolutional neural networks.
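A toy sketch of the end-to-end idea: a single network from pixels to motor commands, with a "perception" layer and a "control" layer trained jointly by backpropagation rather than designed and tuned separately. The task, dimensions, and plain-numpy network are invented for illustration; this is not guided policy search or the cluster-scale grasping system from the talk.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy task: a 6x6 "image" has one bright pixel; the correct motor command
# is the pixel's (row, col) offset from the image center, scaled to [-1, 1].
images = np.eye(36)                              # all 36 one-pixel images
rows, cols = np.divmod(np.arange(36), 6)
targets = np.stack([(rows - 2.5) / 2.5, (cols - 2.5) / 2.5], axis=1)

# One pipeline from pixels to actions: no hand-designed interface in between.
W1 = rng.normal(0, 0.5, (36, 32))                # "perception" weights
W2 = rng.normal(0, 0.5, (32, 2))                 # "control" weights
lr = 0.1

def forward():
    h = np.maximum(images @ W1, 0)               # perception features (ReLU)
    return h, h @ W2                             # motor commands

h, a = forward()
initial_loss = np.mean((a - targets) ** 2)
for _ in range(2000):
    h, a = forward()
    grad_a = 2 * (a - targets) / len(images)     # d(MSE)/d(action)
    grad_W2 = h.T @ grad_a                       # gradient into "control"...
    grad_W1 = images.T @ (grad_a @ W2.T * (h > 0))  # ...and into "perception"
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
h, a = forward()
final_loss = np.mean((a - targets) ** 2)
print(f"loss: {initial_loss:.3f} -> {final_loss:.4f}")
```

Because both layers receive gradients from the same action-level error, the features adapt to what the controller needs, which is the point of end-to-end training.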


Sergey Levine is an assistant professor at UC Berkeley. His research focuses on robotics and machine learning. In his PhD thesis, he developed a novel guided policy search algorithm for learning complex neural network control policies, which was later applied to enable a range of robotic tasks, including end-to-end training of policies for perception and control. He has also developed algorithms for learning from demonstration, inverse reinforcement learning, efficient training of stochastic neural networks, computer vision, and data-driven character animation.

  Yuanqing Lin
  Director, Baidu Institute of Deep Learning

Thanks to big data, deep learning algorithms and high-performance computing, artificial intelligence (AI) technologies have seen dramatic development in the past few years. At Baidu, AI is the (next) big thing. This talk will present our strategy and practice. Aiming to achieve artificial general intelligence one day, we are pursuing two directions simultaneously: 1) developing common fundamental technologies, ranging from a deep learning platform to basic computer vision technologies such as image classification, object detection/tracking, image/video segmentation, etc.; 2) building ultimate-level AI technologies in vertical domains such as food recognition, car/pedestrian detection, etc. The talk will present some case studies, with a strong emphasis on forming the closed loop of algorithms, applications, users and data. These advances in deep learning and computer vision offer up new possibilities not only for mobile/internet apps but also for robotics.


Dr. Yuanqing Lin is the Director of the Baidu Institute of Deep Learning. He received his Ph.D. degree in Electrical Engineering from the University of Pennsylvania in 2008. After that, he joined NEC Labs America as a Research Staff Member, working on feature learning and large-scale classification. In 2010, he led the NEC-UIUC team that won first place in the ImageNet Large Scale Visual Recognition Challenge. In April 2012, he became the head of the Media Analytics Department at NEC Labs America, where his team focused on two major research directions: large-scale fine-grained image recognition and 3D visual sensing for autonomous driving. In November 2015, he joined Baidu; his research interests are in machine learning, computer vision and robotics. He served as an Area Chair for NIPS 2015.

Wednesday, October 12, 13:30 – 15:50, Grand Ballroom
  Hyunchul Shim

Unmanned aerial vehicles have undergone rapid development since the 1990s. Originally developed for military applications, they have been extremely successful in missions where direct human participation is deemed dull or dangerous. Thanks to their remarkable success, they began to enter civil airspace for border patrol and aerial surveillance in the late 1990s. During the last few years, the field of UAVs has seen yet another revolution: the advent of small multirotor drones. This new type of drone is affordable yet very capable for aerial photography and even package delivery. These two types of drones are entering the airspace in two distinct ways: the larger ones are now being considered for full integration into civil airspace by the International Civil Aviation Organization. The accommodation of small drones is trickier: they need a whole new way of integration, which is now being tackled by a number of researchers. The success of drones is indebted to advances in computers, sensors, communications and algorithms: they can perform various missions with great flexibility and reliability. It is very impressive to see how a new robotic device is conceived, developed, and evolved into a mature system, so that a whole new industry is formed and rules and regulations are created for it. The speaker intends to share his views obtained from his research since 1991 and from his activities as a member of the ICAO RPAS Panel.


Prof. Hyunchul Shim received the B.S. and M.S. degrees in mechanical design and production engineering from Seoul National University, Seoul, Korea, in 1991 and 1993, respectively, and the Ph.D. degree in mechanical engineering from the University of California, Berkeley, USA in 2000. From 1993 to 1994, he was with Hyundai Motor Company, Korea. From 2001 to 2005, he was with Maxtor Corporation, Milpitas, CA, USA as a Staff Engineer. From 2005 to 2007, he was with the University of California, Berkeley as a Principal Engineer, in charge of the Berkeley Aerobot Team. In 2007, he joined the Department of Aerospace Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, as an Assistant Professor. He is now a tenured Associate Professor. He began his research on autonomous vehicles in 1991 and has been recognized as one of the pioneers in unmanned aerial vehicles for his work on flight control system design, collision avoidance, vision-based aerial mapping and navigation, and anti-drone technologies.

He has received a number of awards, including the 2nd prize in the Global Student Design Competition by National Instruments, USA (2014), an Outstanding Award from the Minister of Science, ICT and Future Planning of the Korean Government, and best paper awards from Qualcomm Inc. and the Korean Aerospace Industry ('09, '14). He serves as the Director of the Intelligent UAV Research Laboratory and the Director of the Korean Civil RPAS Research Center. He is an advisor to the RPAS Panel in ICAO and a member of the Global Future Council of the World Economic Forum.

  Jianda Han
  Shenyang Institute of Automation, Chinese Academy of Sciences

Exploration of the polar regions is of great importance for scientific research on global climate change and the evolution of the earth. However, the tough environment, e.g., low temperatures, strong winds, and complex terrain, makes human exploration difficult and dangerous. In 2011, the China National High Technology Research and Development (863) Program and the National Antarctic Research Expedition (CHINARE) jointly launched a project to design ground mobile and flying robots to conduct large-scale scientific explorations in Antarctica. In this talk, the corresponding techniques, especially the polar rover's prototypes and autonomous control, as well as preliminary field test results from Antarctica, are introduced. The remaining open problems are also summarized in conclusion.


Jianda Han received his Ph.D. in Electrical Engineering from the Harbin Institute of Technology in 1998. Currently he is a professor and vice director of the State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences. His research interests include nonlinear estimation and control, control for the autonomy of robots, and robotic system integration and applications. He also serves as a member of the Expert Panel of the Intelligent Robot Division of the National 863 Program of China.

  Daniel Lee
  University of Pennsylvania

Current AI systems for perception and action incorporate a number of techniques: Bayesian state estimation, probabilistic mapping, trajectory planning, and feedback control. I will describe and demonstrate some of these methods on various autonomous systems, including wheeled, legged, and flying robots. In order to model variability due to pose, illumination, and background changes, low-dimensional manifold representations have been used for learning in these systems. But how well can such manifolds be processed by neural networks? I will show how notions of linear separability and VC dimension can be generalized from input points to manifolds. This analysis provides theoretical predictions for the capacity and generalization ability of invariant classifiers, and a better understanding of the performance of deep neural networks.
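The classical point-based capacity result that the talk generalizes to manifolds can be computed directly: Cover's counting function gives the fraction of the 2^P labelings of P points in general position in N dimensions that a linear classifier can realize, with a sharp transition at P = 2N. (The formula below is the standard result for points; the manifold generalization is the subject of the talk.)

```python
from math import comb

def frac_separable(P, N):
    """Cover (1965): fraction of the 2^P dichotomies of P points in general
    position in R^N that a linear threshold unit can realize."""
    realizable = 2 * sum(comb(P - 1, k) for k in range(min(N, P)))
    return realizable / 2 ** P

# Capacity transition at P = 2N: below it nearly all labelings are linearly
# separable, at it exactly half are, and above it almost none are.
for P in (10, 20, 40):
    print(f"P={P:3d}, N=10: fraction separable = {frac_separable(P, 10):.3f}")
# P=20, N=10 gives exactly 0.500
```

The value 2N per linear threshold unit is the capacity that notions like VC dimension formalize, and it is this kind of count that changes once the inputs are whole manifolds rather than points.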


Daniel Lee is the UPS Foundation Chair Professor in the School of Engineering and Applied Science at the University of Pennsylvania. He received his B.A. summa cum laude in Physics from Harvard University and his Ph.D. in Condensed Matter Physics from the Massachusetts Institute of Technology in 1995. Before coming to Penn, he was a researcher at AT&T and Lucent Bell Laboratories in the Theoretical Physics and Biological Computation departments. He is a Fellow of the IEEE and AAAI and has received the National Science Foundation CAREER award and the University of Pennsylvania Lindback award for distinguished teaching. He was also a fellow of the Hebrew University Institute of Advanced Studies in Jerusalem, an affiliate of the Korea Advanced Institute of Science and Technology, and organized the US-Japan National Academy of Engineering Frontiers of Engineering symposium.

As director of the GRASP Laboratory and co-director of the CMU-Penn University Transportation Center, his group focuses on understanding general computational principles in biological systems, and on applying that knowledge to build autonomous systems.
Thursday, October 13, 13:30 – 15:50, Grand Ballroom

  Byung-Ju Yi
  Hanyang University

Efforts to develop and commercialize medical robots (specifically, surgical robots) have been made intensively all over the world. However, only a small number of these efforts have succeeded in commercialization. In light of this fact, this talk discusses several aspects that enable us to produce fruitful research output and allow translation to industry. Specifically, it discusses how society can contribute to raising engineers in medical robotics research. Education for engineers in medical robotics involves general education programs in engineering and in the clinic. However, there is another important issue that has been ignored, or has not received much attention, in the medical robotics area: education that considers the certification process. Many countries make an effort to obtain certification when they develop medical equipment (ME). Certification is necessary not only for commercialization but also for user safety. The certification authorities of the United States, Europe, and Korea are the FDA, CE, and KFDA, respectively. Typically, ME standards are defined by the quality assurance system, which vouches that the manufactured medical equipment is safe and effective. The Korean standard for medical equipment follows the International Standard IEC 60601, the same as in Europe, Japan, and the U.S.A. This talk briefly goes over medical equipment certification standards. Prior to the development of ME, the grade of the device should be determined first. Usually, ME is classified into four or three grades according to the purpose of use and potential risks. For instance, the VI master-slave robot that inserts a catheter into the patient's vessel is classified as the 2nd grade. This is because the risk to life is low, even though there is some risk to human health in the case of breakdown or malfunction. The process of identifying the four grades is also explained.
Several successful and failed examples for each grade will be addressed. For certification, developers have to adjust to several standards. These standards are for biological safety, electrical safety, and mechanical et al. This talk will take otologic surgical device and vascular intervention robotic system as examples to explain the standards for safety and performance of medical devices. Specially, safety of electric and mechanical aspects will be mainly introduced through modified design examples of medical robots.


1984: Bachelor, Hanyang University, Mechanical Engineering, Korea
1986: Master's degree, The University of Texas at Austin, Mechanical Engineering, USA
1991: Ph.D. from The University of Texas at Austin, Mechanical Engineering, USA
1992-1995: Assistant Professor, Department of Mechanical Engineering, Korea University of Education and Technology, Korea
1995 ~: Professor, School of Electrical Engineering, Hanyang University, Korea
2004-2005: Visiting Professor, Department of Mechanical Engineering, Johns Hopkins University, USA
2012: Visiting Professor, Kyushu University Hospital, Japan
2014 ~: Director, Center for ICT-based Medical robotics, Hanyang University, Korea

His current research interests are general robotic mechanism theory and its application to medical robotic systems, with special interest in ENT, neurosurgery, and vascular intervention. He is currently the President of the Korea Society of Medical Robotics and Vice President of the Korean Robotics Society.
  Makoto Hashizume
  MD, PhD, FACS, Distinguished Professor, Kyushu University

Minimally invasive endoscopic surgery is now recognized as a standard operation in almost all surgical fields. Thanks to the development of information and robotic technology, the technical difficulties have been overcome in part, especially in the movement of the instruments and the view of the operative field, by introducing a super-hand with 7 degrees of freedom and a 3D endoscope, respectively. However, the application of robotic surgery is still limited to the pelvic cavity, and many problems remain to be solved in clinical situations before it can develop further. Among them, education is the most important issue: both engineers and medical doctors must understand the whole process from creating an idea to producing a commercial product. The final purpose is to contribute to human healthcare with medical devices that are useful in caring for patients. However, the big difference between industrial devices and medical devices is contact with humans: more attention has to be paid to the "safety" of patients than to their "benefit". If a device's cost performance is poorer than that of a conventional one, the hospital director will not allow doctors to use or purchase it. When medical devices are intended for operative use, sterilization is often overlooked by engineers. Engineers must observe the real clinical situation and understand how the devices are used in the OR and what the clinical needs are. Following the current change in real conditions, our interests are now moving toward personalized precision medicine based on multidisciplinary computational anatomy, as well as toward less invasiveness from the viewpoint of quality of life. Integration of medicine and engineering is one of the solutions to the current problems. The ideal circumstance might be one where multidisciplinary personnel with the same purpose work together in the same room of the same hospital at the same time.


1979: Graduated from Kyushu University, School of Medicine (MD)
1984: Graduated from Kyushu University, Graduate School of Medical Sciences (PhD)
1998-1999: Associate Professor, Department of Surgery II, Kyushu University Graduate School of Medical Sciences
1999-: Professor, Division of Disaster and Emergency Medicine, Department of Advanced Medical Initiatives, Kyushu University Graduate School of Medical Sciences
2003-: Director, Department of Advanced Medicine and Innovative Technology, Kyushu University Hospital
2006-2012: Director, Emergency and Critical Care Center, Kyushu University Hospital
2010-: Director, Center for Advanced Medical Innovation, Kyushu University
2016-: Director, International Research Center for Multidisciplinary Computational Anatomy

His current research interests are the development of minimally invasive surgical robotic systems and multidisciplinary computational anatomy. He received an official commendation for innovative technology from the Minister of Education, Culture, Sports, Science and Technology in 2006.

  Salvatore J. Brogna
  Executive Vice President, Product Operations, Intuitive Surgical Co.

Robotic-assisted surgery has given surgeons open-surgery-like capabilities in a minimally invasive format. To advance the capabilities of the next generation of robotic-assisted surgery, we should challenge ourselves to increase usability and convenience for operating room staff while improving the information available to surgeons, helping them make faster decisions and achieve better patient outcomes. The next generation of engineers developing robotic-assisted surgical systems must be able to apply the latest advances in technologies (haptics, optics, controls, and mechatronics) with improved application of human factors and usability design. To foster cross-discipline learning for engineers, training pathways that promote advanced learning through a deep understanding of engineering sciences and experimental methods are essential, so that engineers can best meet the needs of the surgical community. Mr. Brogna will share insights and real company experiences in bridging medical needs with engineering technology for the next generation of surgery.


Salvatore "Sal" J. Brogna joined Intuitive Surgical in 1999 as part of the product design team. He has held the titles of Vice President, Engineering and Senior Vice President, Product Development, and is now the Executive Vice President, Product Operations. Prior to joining Intuitive, Mr. Brogna led the design and development of complex robotic systems for more than 15 years at Unimation, Genesis Automation, and Adept Technology. Mr. Brogna holds a B.S. and an M.S. in Mechanical Engineering from Clarkson University in New York.

Thursday, October 13, Korea Trade Exhibition Center 
  Henrik Hautop Lund, Ph.D.
  Technical University of Denmark

Decades of research into intelligent, playful technology and user-friendly man-machine interfaces have provided important insight into the creation of robotic systems and intelligent interactive systems that are much more user-friendly, safer, and cheaper than what appeared possible merely a decade or two ago. This is significantly disrupting the industry in several market sectors. This award talk describes the components of the playware and embodied artificial intelligence research that has led to disruption in the industrial robotics sector, and which points to the next disruption, in the health care sector. The components include playful robotics, LEGO robots for kids, minimal robot systems, and user-friendly, behavior-based, biomimetic, modular robotics and intelligent systems. Insight into these components, and their use in synthesis when designing robots and intelligent systems, allows anybody, anywhere, anytime to use these systems, providing unforeseen flexibility in the sectors (e.g. the health sector) that are disrupted by them. The Playware ABC concept allows you to develop life-changing solutions for anybody, anywhere, anytime through building bodies and brains that let people construct, combine, and create. Indeed, with recent technology development, we are able to exploit robotics and modern artificial intelligence (AI) to create playware in the form of intelligent hardware and software that creates play and playful experiences for users of all ages. Such playware technology acts as a play force which inspires and motivates you to enter into a play dynamic, in which you forget about time and place and simultaneously become highly creative and increase your skills - cognitive, physical, and social. Clinical effect studies in the health sector show that play with the Moto tiles has a remarkable effect on the functional abilities, including balance, of older adults. Many older adults are even able to discard their walking aids after playing on the Moto tiles.


Professor Henrik Hautop Lund, Center for Playware, Technical University of Denmark, is World Champion in RoboCup Humanoids Freestyle 2002 and has more than 175 scientific publications. He has developed shape-shifting modular robots, presented to the Emperor of Japan, and has collaborated closely on robotics and AI with companies such as LEGO, Kompan, BandaiNamco, and Mizuno for the past two decades. He developed technical-skill-enhancing football games and global connectivity based on modular playware for townships in South Africa for the FIFA World Cup 2010 (together with footballers Laudrup and Hoegh). Two decades of scientific studies of such playware - playful robotics, LEGO robots for kids, minimal robot systems, and user-friendly, behavior-based, biomimetic, modular robotics - led Prof. Lund's students to form the Universal Robots company, which disrupted the industrial robotics sector and was recently sold for 285 million USD.

Together with international pop star and world music promoter Peter Gabriel, he has developed the MusicTiles app and MagicCubes as a music 2.0 experience to enhance musical creativity among everybody, even people with no initial musical skills whatsoever; they have been used for stage performances during Peter Gabriel's tour. He invented the patented modular interactive tiles for playful prevention and rehabilitation, which are implemented in large numbers among the elderly. He is currently a board member of the 20 million euro Patient@Home project in Denmark, and a partner in the EU projects Human Brain Project and REACH. In all cases, the modular playware technology approach is used in a playful way to enhance learning, creativity, and activity among anybody, anywhere, anytime.