
Could Machine Learning Seal the Deal for Lifelike Bionic Limbs?

When it comes to fine motor control and natural movement in nerve-controlled prostheses, why not let machine learning do the thinking?

Research over the past ten years has already led to advances in voluntary nerve control over bionic limbs. In 2014, Les Baugh became the first bilateral amputee to gain control of two neuro-prosthetic arms. He underwent nerve-pathway reassignment surgery to redirect signals from his brain's primary motor cortex to a location in his upper back, then spent hours with simulation software learning to control the arms before ever putting them on.

It’s breakthroughs like this that make people wonder what’s next for the technology. Baugh could train his primary motor cortex to control the movement of two arms, but controlling a bionic hand would be harder still: it would require complex decoding algorithms to interpret neuronal signals as intentions to move the prosthesis.

For this reason, many researchers have turned to machine learning as a viable route to fine motor control in bionic hands. Scientists at Newcastle University developed a convolutional neural network (CNN) that uses machine vision to classify objects by their grip type, and tested the CNN’s ability to relay that grip type to a thought-controlled prosthetic limb.

The team trained their CNN on 500 virtual objects from the free Amsterdam Library of Object Images (ALOI). ALOI represents each object with 80 pictures taken at 5-degree intervals; these pictures are processed as renderings that can be classified by grip type.

The information needed to process the 3D images is stored in the learned weights of the individual nodes across the CNN’s layers. When presented with completely different objects in real life, the CNN could classify grip type with an accuracy of 75%. The team used their results to generate the free Newcastle Library of Grip Types.
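To make the idea concrete, here is a minimal sketch of the kind of forward pass such a classifier performs: an image is convolved with learned filters, the resulting feature maps are pooled, and a final layer scores the grip classes. The layer sizes, class names, and (random, untrained) weights are illustrative assumptions, not the Newcastle team's actual architecture.

```python
import numpy as np

# Illustrative grip classes; the real taxonomy used in the study may differ.
GRIP_CLASSES = ["palmar wrist neutral", "palmar wrist pronated", "tripod", "pinch"]

rng = np.random.default_rng(0)

def conv2d(image, kernels):
    """Valid 2-D convolution of an (H, W) image with (K, kh, kw) kernels, plus ReLU."""
    k, kh, kw = kernels.shape
    h, w = image.shape
    out = np.empty((k, h - kh + 1, w - kw + 1))
    for i in range(out.shape[1]):
        for j in range(out.shape[2]):
            patch = image[i:i + kh, j:j + kw]
            out[:, i, j] = (kernels * patch).sum(axis=(1, 2))
    return np.maximum(out, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def classify_grip(image, kernels, weights):
    """Convolve, global-average-pool each feature map, then score the classes."""
    features = conv2d(image, kernels).mean(axis=(1, 2))
    return softmax(weights @ features)

# Random parameters purely to demonstrate the data flow; training would fit these.
kernels = rng.standard_normal((8, 3, 3))
weights = rng.standard_normal((len(GRIP_CLASSES), 8))
probs = classify_grip(rng.random((32, 32)), kernels, weights)
print(GRIP_CLASSES[int(probs.argmax())])
```

A trained network would arrive at its kernel and output weights by gradient descent over the labeled ALOI renderings; the sketch only shows how an image flows through to a probability over grip classes.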

The team tested their CNN’s ability to integrate with a myoelectric prosthesis, and tested its depth perception using a stereo-lens camera. The prosthesis was evaluated by trans-radial amputees in a clinical setting using objects from four different grip classes. Amputees could initiate a small wrist movement through voluntary nerve commands to activate the stereo-lens camera. After the CNN classified the grip from the snapshot, the hand could move into the necessary position and pick up objects.
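The control loop described above can be sketched as follows: a voluntary wrist signal crossing a threshold triggers a snapshot, a classifier names the grip, and the hand is driven to a matching preshape. All function names, thresholds, and joint angles here are hypothetical stand-ins for the study's actual hardware interface.

```python
# Assumed mapping from four grip classes to hand preshapes (angles in degrees).
GRIP_POSTURES = {
    "palmar wrist neutral":  {"wrist_rotation": 0,  "finger_flexion": 40},
    "palmar wrist pronated": {"wrist_rotation": 90, "finger_flexion": 40},
    "tripod":                {"wrist_rotation": 0,  "finger_flexion": 60},
    "pinch":                 {"wrist_rotation": 0,  "finger_flexion": 75},
}

def control_step(wrist_emg_amplitude, take_snapshot, classify, threshold=0.5):
    """One control cycle: if the myoelectric wrist signal crosses the threshold,
    capture an image, classify its grip type, and return the target posture."""
    if wrist_emg_amplitude < threshold:
        return None  # no movement commanded
    image = take_snapshot()
    grip = classify(image)
    return GRIP_POSTURES[grip]

# Stubbed camera and classifier, just to demonstrate the flow.
posture = control_step(
    wrist_emg_amplitude=0.8,
    take_snapshot=lambda: "stereo-image",
    classify=lambda img: "tripod",
)
print(posture)  # → {'wrist_rotation': 0, 'finger_flexion': 60}
```

The key design point is that the user supplies only the intent (a small voluntary wrist signal), while the vision system supplies the grip details, splitting the control burden between patient and machine.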

With more research, deep neural networks could be used to improve the natural movements of prostheses. Paired with tools besides machine vision, machine learning could be used to augment voluntary control in movements like walking and jumping.

Read the full methods and results of this experiment in the Journal of Neural Engineering.

Read more about the bionic arms used by Les Baugh with engineering resources provided by Johns Hopkins Applied Physics Laboratory.
