Game-changing Assistive Technology: Toward Robotic Leg Control that Interacts with the Brain

May 13, 2024
Part 1 of a three-part series unpacks how a researcher strives to improve performance by merging neuroscience and human motor control with robotics and artificial intelligence.

Why should we be interested in designing machines that think and move like humans?

Two reasons spring to mind, if you ask Dr. Brokoslaw Laschowski, a research scientist and principal investigator at the Toronto Rehabilitation Institute, Canada’s largest rehabilitation hospital.

The first reason is automation: designing autonomous machines, such as walking robots that can see, think and move like humans. The second is merging humans with machines. Think of this integration as a way to connect human motor control to a computer, robot or other mechatronic system, he said. Robotic prosthetics for patients with leg amputations, smart glasses for patients with vision loss and brain-machine interfaces are all examples.


Laschowski, an assistant professor in the Department of Mechanical and Industrial Engineering and the Robotics Institute at the University of Toronto, where he leads the Neural Robotics Lab, applies his education in neuroscience and human motor control to improve health and performance by integrating robotics and artificial intelligence with humans. 

Robotic prosthetic legs and exoskeletons are the physical systems that showcase the research his lab actually specializes in, Laschowski pointed out. “We focus on learning, optimization, and control of humans interacting with machines,” he said.

Combining Motion Control, Sensor Technology and AI

In layperson’s terms, Laschowski’s research in prosthetics and exoskeletons aims to determine the activity the patient wants to perform and then automate it. This level of automation, referred to as high-level control, relies on a fully automated AI controller to determine what type of activity the patient wants to perform.

“For high-level control, we use sensors to record neural activity,” explained Laschowski. “And we use computer vision with sensor fusion and machine learning to decode the patient’s intent. This is then translated to the mid-level controller, which uses reinforcement learning or optimal control to decide how the patient, or more specifically the robotic leg, should walk from Point A to Point B.”

The applications of this research vary but could involve helping a patient to see or walk. It may also involve the design of robots for search and rescue, and firefighting. The applications for developing autonomous humanoid robots are boundless, said Laschowski. 

Focus on Optimization and Control

Laschowski has designed several physical robots in the past, but the principal focus of his research is optimization, machine learning, and control. His autonomous controllers use a high-level system that’s responsible for inferring what the robot should be doing. For example, when a patient walks with a robotic prosthetic leg, onboard sensors, such as goniometers or inertial measurement units (IMUs), can be used for automated intent recognition.
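To make the idea concrete, here is a minimal sketch of windowed intent recognition from IMU-style data. Everything in it is illustrative: the `NearestCentroidIntent` classifier, the mean/standard-deviation features, and the synthetic “standing” versus “walking” signals are assumptions for the example, not the lab’s actual models.

```python
import numpy as np

def extract_features(window):
    """Summarize a window of IMU samples (shape: samples x channels)
    with per-channel mean and standard deviation."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

class NearestCentroidIntent:
    """Toy intent recognizer: assigns a window to the activity whose
    training-feature centroid is closest in Euclidean distance."""
    def __init__(self):
        self.centroids = {}

    def fit(self, windows_by_activity):
        for activity, windows in windows_by_activity.items():
            feats = np.array([extract_features(w) for w in windows])
            self.centroids[activity] = feats.mean(axis=0)

    def predict(self, window):
        f = extract_features(window)
        return min(self.centroids,
                   key=lambda a: np.linalg.norm(f - self.centroids[a]))

# Synthetic demo: "walking" has a higher-variance accelerometer signal
# than "standing" (purely illustrative, not real sensor data).
rng = np.random.default_rng(0)
train = {
    "standing": [rng.normal(0.0, 0.05, (50, 3)) for _ in range(20)],
    "walking":  [rng.normal(0.0, 1.0,  (50, 3)) for _ in range(20)],
}
clf = NearestCentroidIntent()
clf.fit(train)
print(clf.predict(rng.normal(0.0, 1.0, (50, 3))))  # expected: walking
```

A deployed system would replace the centroid classifier with a learned model and fuse multiple sensor streams, but the window-then-classify structure is the same.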

His research frequently involves the use of computer vision. Cameras are strapped to the human and/or robotic leg, and various sensor fusion methods and machine learning models are used to infer what the patient wants to do and where to go, he said. The data is used to select a specific locomotion mode controller.  


“We discretize human and/or robot locomotion into different controllers,” explained Laschowski. Separate controllers are used for sitting, standing, walking, climbing stairs or walking downstairs. And then there’s a need for high-level switching between these different controllers.  

That’s where artificial intelligence comes in. The data allows researchers to do pattern recognition. Sensor fusion is used to infer what type of activity the patient wants to do before selecting the corresponding controller for that given activity. 
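The discretized-controller idea can be sketched as a simple dispatcher. The controller functions, torque values, and `HighLevelSwitcher` class below are hypothetical placeholders; a real system would run model-based or learned controllers for each locomotion mode.

```python
# Each locomotion mode gets its own controller; the decoded intent from
# the high-level classifier drives the switch. These controllers are
# placeholders that return a named joint-torque command.

def sit_controller(state):
    return {"mode": "sit", "knee_torque": 0.0}

def stand_controller(state):
    return {"mode": "stand", "knee_torque": 5.0}

def walk_controller(state):
    return {"mode": "walk", "knee_torque": 12.0}

CONTROLLERS = {
    "sit": sit_controller,
    "stand": stand_controller,
    "walk": walk_controller,
}

class HighLevelSwitcher:
    """Runs one discrete controller at a time, switching only when the
    decoded intent names a known mode (real systems would also debounce
    against spurious classifications)."""
    def __init__(self, initial="stand"):
        self.mode = initial

    def step(self, decoded_intent, state):
        if decoded_intent in CONTROLLERS:
            self.mode = decoded_intent
        return CONTROLLERS[self.mode](state)

sw = HighLevelSwitcher()
print(sw.step("walk", state={})["mode"])  # prints "walk"
```

Holding the last valid mode when the classifier emits something unrecognized is one simple safety choice; production controllers would layer far more checks on top.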

His lab relies on different sensing technology. One is computer vision, whereby cameras sense the walking environment, and the data is used for path planning and control. “This is arguably the area of research that we’re best known for, where we’re trying to develop the Tesla of robotic legs,” said Laschowski.  

Sensors for Neural and Muscle Interfaces

His lab is now getting into neural interfaces, where electroencephalography (EEG), a non-invasive sensor, is used to record brain activity during motor imagery. It means that as the patient thinks about doing some movement, the technology can help researchers decode the neural signals and estimate the patient’s intent. Alternatively, the patient could actively be doing the movement, and the sensors can help decode what the patient is doing before the information is translated to the robotic leg. Surface electromyography (EMG) is another method used to measure neural signals, but at the muscle level. 
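As a simplified illustration of the muscle-level signal path, the sketch below rectifies and smooths a synthetic surface-EMG trace, then thresholds the resulting envelope to flag muscle activation. The signal, window length, and threshold are invented for the example; real EMG and EEG decoding pipelines are considerably more sophisticated.

```python
import numpy as np

def emg_envelope(signal, window=50):
    """Full-wave rectify the EMG signal, then smooth with a moving
    average to obtain an amplitude envelope (a common preprocessing step)."""
    rectified = np.abs(signal)
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")

def detect_activation(signal, threshold):
    """Return True wherever the envelope exceeds a fixed threshold —
    a crude stand-in for decoding 'the muscle is active'."""
    return emg_envelope(signal) > threshold

# Synthetic EMG: a quiet baseline followed by a burst of activity.
rng = np.random.default_rng(1)
quiet = rng.normal(0, 0.01, 500)
burst = rng.normal(0, 0.5, 500)
sig = np.concatenate([quiet, burst])
active = detect_activation(sig, threshold=0.1)
print(active[:500].mean(), active[500:].mean())
```

The envelope-plus-threshold step would typically feed a classifier rather than act as the decision itself, but it shows how raw neural or muscle signals are condensed into a control-relevant quantity.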

Reinforcement Learning and Control Systems

“Our high-level AI controller determines the patient’s intended activity. All of this is automated; it is fully autonomous,” said Laschowski.

The robot then decides how to walk from Point A to Point B, a task called mid-level control. There are two methods the lab uses for mid-level control: reinforcement learning and optimal control, explained Laschowski. In the context of walking, for example, experimental data would come from a patient walking with the robotic leg. But collecting that data can be time-consuming, resource-intensive and dangerous. “We wouldn’t want a patient interacting with that robot,” Laschowski cautioned.

This is where his research in physics-based computer simulation fits in. “It allows us to design and optimize our controllers very cheaply and reliably, all in simulation,” he said. 

These simulations capture physics and are increasingly used by large tech companies such as OpenAI, Nvidia, Google, and Meta. “We're doing something similar,” said Laschowski.  

Within this reinforcement learning framework, the research team can specify that the optimal solution should make the robotic leg behave similarly to a biological leg. This is known as biomimicry.
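One common way to encode biomimicry in a reinforcement learning setup is reward shaping: the agent earns more return the more closely its joint trajectory tracks a reference biological gait. The sketch below is an assumption-laden toy, with an invented `biomimetic_reward` function and a sinusoid standing in for motion-capture knee data.

```python
import numpy as np

def biomimetic_reward(robot_knee_angle, reference_knee_angle,
                      alive_bonus=1.0, w=5.0):
    """Imitation-style reward shaping: a constant alive bonus minus a
    penalty that grows with the squared deviation from the reference
    (biological) knee angle at this timestep. Weights are illustrative."""
    deviation = (robot_knee_angle - reference_knee_angle) ** 2
    return alive_bonus - w * deviation

# Reference gait: one stride of knee flexion sampled from a sinusoid
# (a stand-in for motion-capture data of healthy walking).
t = np.linspace(0, 1, 100)
reference = 0.5 + 0.5 * np.sin(2 * np.pi * t)

perfect = sum(biomimetic_reward(reference[i], reference[i]) for i in range(100))
offset = sum(biomimetic_reward(reference[i] + 0.3, reference[i]) for i in range(100))
print(perfect, offset)  # tracking the reference earns the higher return
```

Maximizing this return in simulation pushes the learned policy toward the reference gait, which is the mechanism behind the biomimicry assumption described above.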

In reality, the anthropometrics of a patient with an amputation differ from those of an able-bodied individual, Laschowski said. Since the physics (the system dynamics) are different, the optimal control solution may require a different policy and different biomechanics.  

"Using our simulations, we may one day discover a walking gait that exceeds that of healthy human performance," said Laschowski. "We haven’t gotten there yet. That’s a little bit more of a complicated problem. But right now, we assume, let's program the robot, or have the robot learn how to walk in a way that mimics human walking."
