
Fujitsu Announces Better Memory Capacity for Deep Neural Learning Networks

Oct. 13, 2016
Fujitsu announces development of a GPU memory system that will enable more layers in a DNL network without compromising its speed.

Deep neural learning (DNL) technologies have become an advanced tool for computers to identify the content in images, decipher audio recordings, and analyze other complex inputs. A DNL network consists of layers of nodes, with thousands of nodes in all. Each node processes part of the content from the input and generates a few interpretations that are sent to nodes in a subsequent layer for further processing. This continues throughout the layers.
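
The layer-by-layer processing described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular network: each "node" here is a hypothetical weight vector, and each node emits one value (a sigmoid of its weighted sum) to every node in the next layer.

```python
import math

def forward(layers, x):
    """Propagate an input vector through successive layers of nodes.
    Each layer is a list of node weight vectors; each node squashes
    its weighted sum through a sigmoid and passes the result onward."""
    for layer in layers:
        x = [1.0 / (1.0 + math.exp(-sum(w * v for w, v in zip(node, x))))
             for node in layer]
    return x

# Hypothetical two-layer network: 3 inputs -> 2 nodes -> 1 node.
layers = [
    [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]],  # layer 1: two nodes
    [[1.0, -1.0]],                          # layer 2: one node
]
out = forward(layers, [1.0, 0.5, -1.5])  # a single value between 0 and 1
```

Real networks simply repeat this pattern across many more layers and far wider layers, which is what drives the memory demands discussed below.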

After an input has been processed through the network, the output is compared to a desired output and the computer generates an error reading. This error is fed back through the network so that each node's interpretation can be weighted. Based on the error, some interpretations count more heavily toward the final output. This process may run for thousands of iterations until the input is interpreted with minimal error. This is how the machine learns.
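
The iterate-and-correct loop described above can be shown with a single linear node. This sketch uses plain gradient descent on squared error, a simplification of the full backpropagation a DNL network performs; the weights, inputs, and learning rate are made up for illustration.

```python
def update_weights(weights, inputs, output, target, lr=0.1):
    """One learning iteration: compare the node's output to the desired
    output and nudge each weight in the direction that shrinks the error."""
    error = output - target  # the "error reading" fed back to the node
    return [w - lr * error * x for w, x in zip(weights, inputs)]

weights = [0.5, -0.3]   # hypothetical starting weights
inputs = [1.0, 2.0]
target = 1.0            # the desired output
for _ in range(100):    # repeated iterations drive the error toward zero
    output = sum(w * x for w, x in zip(weights, inputs))
    weights = update_weights(weights, inputs, output, target)
```

After enough iterations the output converges on the target, which is the "minimal error" state the article describes.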

Graphics processing units (GPUs) are generally used for DNL because they excel at parallel processing, handling many nodes at once. During training, the GPU must keep the weights and data associated with each error reading in every layer in its on-board memory. When more layers are added, the processing speed of the GPU decreases because more of that memory is consumed, forcing data to be shuttled to and from slower external memory. Central processing units (CPUs), by contrast, are suited to serial processing, where data is interpreted one node at a time and passed through single strings of nodes. An individual CPU core can run faster on such a string, but it cannot match a GPU's throughput across thousands of nodes.
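
A back-of-the-envelope calculation shows why adding layers strains GPU memory: both the weights and the per-layer intermediate data must stay resident until the error is fed back through. The layer sizes and 4-byte values below are hypothetical, chosen only to illustrate the scaling.

```python
def layer_memory_bytes(layer_sizes, bytes_per_value=4):
    """Rough training-memory estimate for a fully connected network:
    weights between consecutive layers plus the intermediate data
    (one value per node) that each layer must store for the error pass."""
    weights = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
    intermediate = sum(layer_sizes)
    return (weights + intermediate) * bytes_per_value

shallow = layer_memory_bytes([4096] * 4)    # 4 layers of 4,096 nodes
deep = layer_memory_bytes([4096] * 16)      # 16 layers: several times more
```

Weight storage grows with every layer added, which is why deeper networks quickly exhaust a GPU's fixed on-board memory.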

With the introduction of a new memory system, Fujitsu announces development of a GPU that enables more layers in a DNL network without compromising its speed. Adding more layers improves the overall accuracy and learning capacity of the network. At each layer, the GPU compares the weights of nodal connections to a “weight error” calculated at the end of each iteration, and simultaneously compares the data stored at each layer to the “data error” calculated by the GPU. By subtracting the errors from the existing weights and data in place, the GPU can delete excess data and weights stored at each layer. This frees up memory so that the GPU can operate faster, storing only the data that is still necessary.
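
The delete-as-you-go idea can be sketched as follows. This is an illustration of the general principle only, not Fujitsu's actual implementation: as the error propagates backward, each layer's stored data is consumed and then released, so peak memory shrinks as the pass proceeds.

```python
def backward_pass_with_reuse(stored_data, apply_error):
    """Walk the layers in reverse; once a layer's stored data has been
    combined with the error (via apply_error), delete it so the memory
    can be reused instead of held for the whole iteration."""
    freed = 0
    for i in reversed(range(len(stored_data))):
        apply_error(stored_data[i])  # fold the error into this layer
        stored_data[i] = None        # excess data deleted; memory reclaimed
        freed += 1
    return freed

# Hypothetical per-layer stored data for a three-layer network.
stored = [[0.1] * 4, [0.2] * 4, [0.3] * 4]
n_freed = backward_pass_with_reuse(stored, lambda data: None)
```

Freeing each layer's buffer as soon as it is no longer needed is what lets the same GPU memory support a deeper network.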

Image courtesy of Fujitsu.

The new memory system was tested in the Caffe open-source deep learning framework. Evaluations used AlexNet and VGGNet, which are common in DNL research initiatives. Fujitsu reports that the new system reduced memory usage by 40%, nearly doubling the learning capacity and speed of the DNL network. The company plans to release the technology in March 2017 for use in its Human Centric AI Zinrai.

