Game-changing Assistive Technology: Toward Robotic Leg Control that Interacts with the Brain

May 13, 2024
Part 1 of a three-part series unpacks how a researcher strives to improve performance by merging neuroscience and human motor control with robotics and artificial intelligence.

Why should we be interested in designing machines that think and move like humans?

Two reasons spring to mind, if you ask Dr. Brokoslaw Laschowski, a research scientist and principal investigator at the Toronto Rehabilitation Institute, Canada’s largest rehabilitation hospital.

The first reason is automation, or designing autonomous machines, such as walking robots that can see, think and move like humans. The second is merging humans with machines. Think of this integration as a way to connect human motor control to a computer, robot or some other mechatronic system, he said. Robotic prosthetics for patients with leg amputations, smart glasses for patients with vision loss and brain-machine interfaces are all examples.


Laschowski, an assistant professor in the Department of Mechanical and Industrial Engineering and the Robotics Institute at the University of Toronto, where he leads the Neural Robotics Lab, applies his education in neuroscience and human motor control to improve health and performance by integrating robotics and artificial intelligence with humans. 

Robotic prosthetic legs and exoskeletons are the physical systems that make visible what his lab actually specializes in, Laschowski pointed out. “We focus on learning, optimization, and control of humans interacting with machines,” he said.

Combining Motion Control, Sensor Technology and AI

In layperson’s terms, Laschowski’s research in prosthetics and exoskeletons aims to determine the activity the patient wants to do and automate those tasks. This level of automation, referred to as high-level control, involves a fully automated AI controller that infers what type of activity the patient wants to perform.

“For high-level control, we use sensors to record neural activity,” explained Laschowski. “And we use computer vision with sensor fusion and machine learning to decode the patient’s intent. This is then translated to the mid-level controller, which uses reinforcement learning or optimal control to decide how the patient, or more specifically the robotic leg, should walk from Point A to Point B.”

The applications of this research vary but could involve helping a patient to see or walk. It may also involve the design of robots for search and rescue, and firefighting. The applications for developing autonomous humanoid robots are boundless, said Laschowski. 

Focus on Optimization and Control

Laschowski has designed several physical robots in the past, but the principal focus of his research is optimization, machine learning, and control. His autonomous controllers use a high-level system that is responsible for inferring what the robot should be doing. For example, when a patient walks with a robotic prosthetic leg, onboard sensors, such as goniometers or inertial measurement units (IMUs), can be used for automated intent recognition.

His research frequently involves the use of computer vision. Cameras are strapped to the human and/or robotic leg, and various sensor fusion methods and machine learning models are used to infer what the patient wants to do and where to go, he said. The data is used to select a specific locomotion mode controller.  
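The decision structure described above can be sketched in a few lines. The example below is a hypothetical confidence-weighted vote over per-sensor intent estimates; a deployed system would instead use a learned fusion model over raw IMU and camera features, and all names and numbers here are illustrative:

```python
def fuse_intent(predictions):
    """Confidence-weighted vote over per-sensor intent estimates.

    `predictions` maps a sensor name to a (predicted locomotion mode,
    confidence in [0, 1]) pair. Each sensor adds its confidence to the
    score of the mode it predicts; the highest-scoring mode wins.
    """
    scores = {}
    for _sensor, (mode, confidence) in predictions.items():
        scores[mode] = scores.get(mode, 0.0) + confidence
    return max(scores, key=scores.get)


# A confident camera-based estimate can outvote the IMU estimate.
intent = fuse_intent({
    "imu": ("walk", 0.6),
    "camera": ("stairs_up", 0.9),
    "emg": ("stairs_up", 0.4),
})
```

The fused intent would then index into the bank of locomotion-mode controllers, one per activity.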


“We discretize human and/or robot locomotion into different controllers,” explained Laschowski. Separate controllers are used for sitting, standing, walking, climbing stairs or walking downstairs. And then there’s a need for high-level switching between these different controllers.  

That’s where artificial intelligence comes in. The data allows researchers to do pattern recognition. Sensor fusion is used to infer what type of activity the patient wants to do before selecting the corresponding controller for that given activity. 
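The high-level switching between discrete controllers can be pictured as a finite-state machine. The sketch below is illustrative only: the mode names follow the activities mentioned above, but the transition table and gating logic are assumptions, not the lab's actual design, which would also gate switches on classifier confidence and gait phase:

```python
class LocomotionModeSwitcher:
    """Finite-state machine for switching between discrete
    locomotion-mode controllers (sit, stand, walk, stairs)."""

    # Physically plausible transitions between modes (illustrative).
    TRANSITIONS = {
        "sit":         {"stand"},
        "stand":       {"sit", "walk"},
        "walk":        {"stand", "stairs_up", "stairs_down"},
        "stairs_up":   {"walk"},
        "stairs_down": {"walk"},
    }

    def __init__(self, initial_mode="stand"):
        self.mode = initial_mode

    def request(self, intended_mode):
        """Switch to the decoded intent only if the transition is
        allowed from the current mode; otherwise hold the current one."""
        if intended_mode in self.TRANSITIONS[self.mode]:
            self.mode = intended_mode
        return self.mode
```

For example, a decoded "sit" intent is ignored mid-walk (the patient must stand first), which is one simple way to make intent errors fail safely.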

His lab relies on different sensing technology. One is computer vision, whereby cameras sense the walking environment, and the data is used for path planning and control. “This is arguably the area of research that we’re best known for, where we’re trying to develop the Tesla of robotic legs,” said Laschowski.  

Sensors for Neural and Muscle Interfaces

His lab is now getting into neural interfaces, where electroencephalography (EEG), a non-invasive sensing method, is used to record brain activity during motor imagery. As the patient thinks about performing a movement, the technology can help researchers decode the neural signals and estimate the patient’s intent. Alternatively, the patient could actively perform the movement, and the sensors can help decode what the patient is doing before the information is translated to the robotic leg. Surface electromyography (EMG) is another method used to measure neural signals, but at the muscle level.
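A classic signature used in motor-imagery decoding is event-related desynchronization: mu-band (roughly 8-12 Hz) power over the motor cortex drops when a movement is imagined. The toy decoder below illustrates that idea with a naive DFT band-power estimate and an arbitrary threshold; real decoders use spatial filtering and learned classifiers, and every number here is an assumption:

```python
import math

def bandpower(signal, fs, f_lo, f_hi):
    """Power of `signal` in [f_lo, f_hi] Hz via a naive DFT (toy-scale only)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(-signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

def classify_motor_imagery(eeg_window, fs=250.0):
    """Toy decoder: imagined movement suppresses mu-band (8-12 Hz)
    power relative to broadband power (event-related desynchronization)."""
    mu = bandpower(eeg_window, fs, 8.0, 12.0)
    broadband = bandpower(eeg_window, fs, 1.0, 30.0)
    # Low mu share of broadband power -> movement (0.3 threshold is arbitrary).
    return "movement" if mu / max(broadband, 1e-12) < 0.3 else "rest"
```

A window dominated by a 10 Hz rhythm classifies as "rest", while one with the mu rhythm suppressed classifies as "movement".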

Reinforcement Learning and Control Systems

“Our high-level AI controller determines the patient’s intended activity,” said Laschowski. “All of this is automated; it is fully autonomous.”

The robot then decides how to walk from Point A to Point B. This is called mid-level control. There are two methods his team uses for mid-level control: reinforcement learning and optimal control, explained Laschowski. In the context of walking, for example, a patient walks with the robotic leg while data is collected. However, getting experimental data can be time-consuming, resource-intensive, and dangerous. “We wouldn’t want a patient interacting with that robot,” Laschowski cautioned.

This is where his research in physics-based computer simulation fits in. “It allows us to design and optimize our controllers very cheaply and reliably, all in simulation,” he said. 

These simulations capture physics and are increasingly used by large tech companies such as OpenAI, Nvidia, Google, and Meta. “We're doing something similar,” said Laschowski.  

Within this reinforcement learning framework, the research team can specify that the optimal solution should make the robotic leg behave similarly to a biological leg. This is known as biomimicry.

In reality, the anthropometrics of a patient with an amputation differ from those of an able-bodied individual, Laschowski said. Since the physics (the system dynamics) are different, the optimal control solution may require a different policy and different biomechanics.  

"Using our simulations, we may one day discover a walking gait that exceeds that of healthy human performance," said Laschowski. "We haven’t gotten there yet. That’s a little bit more of a complicated problem. But right now, we assume, let's program the robot, or have the robot learn how to walk in a way that mimics human walking."
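One common way to encode that biomimicry assumption in reinforcement learning is to add an imitation term to the reward: the policy is rewarded for making forward progress and penalized for deviating from a reference human gait. The sketch below is a hypothetical reward function; the state layout, names, and weights are illustrative, not the lab's actual formulation:

```python
def gait_reward(state, reference, w_task=1.0, w_imitate=0.5):
    """Hypothetical RL reward for a walking policy.

    Combines a task term (forward progress) with a biomimicry term
    that penalizes squared deviation of the robot's joint angles from
    a reference human gait trajectory at the same gait phase.
    """
    # Task term: reward forward velocity of the trunk/pelvis.
    task = state["forward_velocity"]
    # Imitation term: squared error against the reference joint angles.
    err = sum((state["joint_angles"][j] - reference[j]) ** 2 for j in reference)
    return w_task * task - w_imitate * err
```

Dropping or down-weighting the imitation term is what would, in principle, let the optimizer search for gaits that deviate from, and perhaps exceed, human walking, the open problem Laschowski describes above.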

About the Author

Rehana Begg | Editor-in-Chief, Machine Design

As Machine Design’s content lead, Rehana Begg is tasked with elevating the voice of the design and multi-disciplinary engineer in the face of digital transformation and engineering innovation. Begg has more than 24 years of editorial experience and has spent the past decade in the trenches of industrial manufacturing, focusing on new technologies, manufacturing innovation and business. Her B2B career has taken her from corporate boardrooms to plant floors and underground mining stopes, covering everything from automation & IIoT, robotics, mechanical design and additive manufacturing to plant operations, maintenance, reliability and continuous improvement. Begg holds an MBA, a Master of Journalism degree, and a BA (Hons.) in Political Science. She is committed to lifelong learning and feeds her passion for innovation in publishing, transparent science and clear communication by attending relevant conferences and seminars/workshops. 

Follow Rehana Begg via the following social media handles:

X: @rehanabegg

LinkedIn: @rehanabegg and @MachineDesign

