
Fujitsu Achieves World’s Most Accurate Recognition of Complex Actions and Behaviors with Deep Learning
January 14, 2021 News

 

Fujitsu Laboratories Ltd. announced the development of a technology that utilises deep learning to recognise the positions and connections of adjacent joints in complex movements or behaviours in which multiple joints move in tandem. This makes it possible to recognise with greater accuracy when a person performs a task such as removing objects from a box. The technology achieved the world's highest accuracy on the standard benchmark in the field of behaviour recognition, with significant gains over conventional technologies, which do not make use of information on neighbouring joints.

Fujitsu aims to contribute to significant improvements in public safety and the workplace, helping to deliver on the promise of a safer and more secure society for all by leveraging this technology to check manufacturing procedures and to detect unsafe behaviour in public spaces.

 

Background

In recent years, advances in AI technology have made it possible to recognise human behaviour from video images using deep learning. This technology offers promising applications in a wide range of real-world scenarios, for example, checking manufacturing procedures in factories or detecting unsafe behaviour in public spaces. In general, human behaviour recognition with AI relies on temporal changes in the positions of skeletal joints, such as the hands, elbows, and shoulders, as identifying features, which are then linked to simple movement patterns such as standing or sitting.
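The sketch below illustrates, under assumed joint counts and clip lengths (this is not Fujitsu's implementation), how skeleton-based recognition typically represents its input: a time series of joint coordinates whose frame-to-frame changes serve as the per-joint features a classifier maps to movement labels.

```python
# A minimal sketch of per-joint time-series features for behaviour recognition.
# NUM_JOINTS, NUM_FRAMES and the random "pose estimation" output are assumptions
# for illustration only.
import numpy as np

NUM_JOINTS = 17   # e.g. hands, elbows, shoulders, hips, knees (assumed count)
NUM_FRAMES = 64   # length of the video clip in frames (assumed)

# Hypothetical pose-estimation output: one (x, y) position per joint per frame.
skeleton_sequence = np.random.rand(NUM_FRAMES, NUM_JOINTS, 2)

# Temporal change of each joint between consecutive frames -- the kind of
# per-joint feature conventional recognisers rely on.
joint_motion = np.diff(skeleton_sequence, axis=0)               # (T-1, J, 2)
per_joint_features = joint_motion.reshape(NUM_FRAMES - 1, -1)   # flatten for a simple classifier

print(per_joint_features.shape)  # (63, 34)
```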

With the time-series behaviour-recognition technology developed by Fujitsu Laboratories, Fujitsu has realised a deep learning model that recognises behaviour with high accuracy even for complex behaviours in which multiple joints move in conjunction with each other, such as removing objects from a box during unpacking.

 

About the Newly Developed Technology

Complex movements like unpacking involve the hand, elbow, and shoulder joints moving in tandem as the arm bends and stretches. Fujitsu has developed a new AI model based on a graph convolutional neural network that performs convolution over a graph in which each joint position is a node (vertex) and edges connect adjacent joints according to the structure of the human body. By training this model in advance on time-series joint data, the connection strengths (weights) between neighbouring joints are optimised and connection relationships that are effective for behaviour recognition are acquired. Conventional technologies had to capture the individual characteristics of each joint accurately; with the trained AI model, the combined features of linked adjacent joints can be extracted, making highly accurate recognition possible even for complex movements.
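As a rough illustration of this idea (not Fujitsu's actual model), the sketch below defines a toy skeleton graph with joints as nodes and anatomically adjacent joints as edges, then applies one graph-convolution layer that mixes each joint's features with those of its neighbours using learnable weights. The joint list, edge set, and layer sizes are assumptions for the example.

```python
# A minimal spatial graph-convolution sketch over a toy skeleton graph.
import torch
import torch.nn as nn

# Toy skeleton: 5 joints (e.g. neck, shoulder, elbow, hand, hip) -- assumed.
num_joints = 5
edges = [(0, 1), (1, 2), (2, 3), (0, 4)]  # adjacent-joint connections (assumed)

# Normalised adjacency with self-loops: A_hat = D^-1 (A + I).
A = torch.eye(num_joints)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A_hat = A / A.sum(dim=1, keepdim=True)

class SpatialGraphConv(nn.Module):
    """One graph-convolution layer: aggregate neighbouring joints, then
    transform with learnable weights."""
    def __init__(self, in_channels, out_channels, adjacency):
        super().__init__()
        self.register_buffer("adjacency", adjacency)
        self.linear = nn.Linear(in_channels, out_channels)

    def forward(self, x):
        # x: (batch, frames, joints, channels)
        x = torch.einsum("ij,btjc->btic", self.adjacency, x)  # mix each joint with its neighbours
        return torch.relu(self.linear(x))                     # learnable connection weights

# Example: a clip of 64 frames with (x, y, confidence) per joint.
clip = torch.randn(1, 64, num_joints, 3)
layer = SpatialGraphConv(3, 16, A_hat)
print(layer(clip).shape)  # torch.Size([1, 64, 5, 16])
```

Stacking such layers over both the spatial (joint) and temporal (frame) dimensions is the common pattern for skeleton-based action recognition with graph convolutions.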

This technology was evaluated against the world standard benchmark for behaviour recognition using skeleton data. For simple behaviours in the open data set, such as standing and sitting, accuracy was maintained at the same level as conventional technology that does not use information on neighbouring joints. For complex behaviours, such as a person unpacking a box or throwing an object, accuracy improved markedly, yielding an overall improvement of more than 7% over the conventional approach and the world's highest recognition accuracy.

 

Future Plans

By adding the newly developed AI model for recognising complex behaviours to the 100 basic behaviours already accommodated by Fujitsu's behavioural analysis technology "Actlyzer," it will become possible to rapidly deploy new, highly accurate recognition models. Fujitsu aims to leverage this new capability to roll out the system in fiscal year 2021 and contribute to resolving real-world issues, helping to deliver a safer and more secure society.
