Fujitsu Announces Better Memory Capacity for Deep Neural Learning Networks

Oct. 13, 2016
Fujitsu announces development of a GPU memory system that will enable more layers in a DNL network without compromising its speed.

Deep neural learning (DNL) technologies have become an advanced tool for computers to identify the content in images, decipher audio recordings, and analyze other complex inputs. A DNL network consists of many layers of nodes. Each node processes a piece of the input and generates a few interpretations that are sent to nodes in the subsequent layer for further processing. This continues through every layer of the network.
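The layer-by-layer flow described above can be sketched in a few lines of Python. This is a toy illustration with made-up weights and a sigmoid activation, not Fujitsu's implementation:

```python
import math

def forward(layers, x):
    """Propagate an input vector through each layer in turn.

    `layers` is a list of (weights, biases) pairs. Each node computes a
    weighted sum of the previous layer's outputs, squashes it with a
    sigmoid, and passes the result on to the next layer.
    """
    for weights, biases in layers:
        x = [
            1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + b)))
            for row, b in zip(weights, biases)
        ]
    return x

# A toy two-layer network: 2 inputs -> 3 hidden nodes -> 1 output.
layers = [
    ([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]], [0.0, 0.1, -0.1]),
    ([[0.7, -0.5, 0.2]], [0.05]),
]
output = forward(layers, [1.0, 0.5])
```

Real networks run the same pattern over far larger layers, which is where GPU memory becomes the constraint the article discusses.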

After an input has been processed through the network, the output is compared to a desired output and the computer generates an error reading. This error is fed back through the network so that each interpretation made by a single node can be weighted. Based on the error, some interpretations are considered more heavily in the final output. This process may run for thousands of iterations until the input is interpreted with minimal error. This is how the machine learns.
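The feedback loop described here can be illustrated with a minimal, hypothetical example: a single weight is nudged by its error reading on each iteration until the output matches the desired output.

```python
def train_step(weight, x, target, lr=0.1):
    """One iteration: compute the output, measure the error against the
    desired output, and feed that error back to adjust the weight."""
    prediction = weight * x
    error = prediction - target   # the "error reading"
    weight -= lr * error * x      # weight the connection by its share of the error
    return weight, error

# Repeat until the interpretation matches the target with minimal error.
w = 0.0
for _ in range(200):
    w, err = train_step(w, x=2.0, target=6.0)
# w converges toward 3.0, so the prediction 2.0 * w approaches the target 6.0
```

A real network repeats this for every weight in every layer, which is why the error data that must stay resident in memory grows with the layer count.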

Graphics processing units (GPUs) are generally used for DNL because of the memory capacity they devote to parallel processing. The GPU must remember the weights and data associated with each error reading in every layer, so when more layers are added, its processing speed decreases because more of its resources are tied up in memory. Central processing units (CPUs), by contrast, are better suited to serial processing, where data is interpreted one node at a time and passed through single strings of nodes. They can operate much faster on such tasks because they do not need to hold as much in memory as the node layers in a GPU.

With the introduction of a new memory system, Fujitsu announces development of a GPU that enables more layers in a DNL network without compromising its speed. Adding more layers will improve the overall accuracy and learning capacity of the network. At each layer, the GPU will compare the weights of nodal connections to a “weight error” calculated at the end of each iteration, and will simultaneously compare the data stored at each layer to the “data error” calculated by the GPU. By subtracting these errors from the existing weights and data, the GPU can delete excess data and weights stored at each layer. This frees up memory so that the GPU can operate faster, storing only the data that is necessary.
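The subtract-and-discard idea can be sketched as follows. This is a simplified, hypothetical interpretation of the description above (Fujitsu has not published its scheme at this level of detail): each error buffer is folded into the values it corrects, and the buffer itself is then released rather than kept resident.

```python
def apply_updates_in_place(weights, weight_errors, data, data_errors):
    """Fold each error buffer into the values it corrects, then free the
    buffers, so only the updated weights and data stay in memory."""
    for i, e in enumerate(weight_errors):
        weights[i] -= e          # subtract the weight error from the weight
    for i, e in enumerate(data_errors):
        data[i] -= e             # likewise for the layer's stored data
    weight_errors.clear()        # discard the error buffers to reclaim memory
    data_errors.clear()
    return weights, data

new_w, new_d = apply_updates_in_place([1.0, 2.0], [0.1, 0.2], [5.0], [0.5])
```

The design point is that no second copy of the weights or data ever exists: the update happens in place, which is what reduces the per-layer memory footprint.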

Image courtesy of Fujitsu.

The new memory system was tested in the Caffe open-source deep learning framework. Evaluations used AlexNet and VGGNet, which are common in DNL research initiatives. Fujitsu reports that the new system reduced memory usage by 40%, nearly doubling the learning capacity and speed of the DNL network. The company plans to release the technology in March 2017 for use in its Human Centric AI Zinrai.
