
Editor’s Comment: We’re currently in Hangzhou, China, to attend the Apsara Conference 2019, Alibaba Cloud’s largest annual conference. Formerly known as the “Computing Conference”, this event has been renamed to “Apsara” this year, after a legendary spirit of the clouds as well as Alibaba Cloud’s own massive-scale cloud computing OS.
This signature event of the cloud intelligence provider showcases the latest research, development and industry applications in the field of cloud computing. Over the years, the conference has become a global event in the sector, covering the latest trends in frontier technologies, including AI, data analytics and IoT, among others.
Click here to read more about the goings-on of the first day of the event on our sister site, DSA.
The highlight of the first day of the Apsara Conference 2019 is without a doubt the launch of Alibaba’s first AI chip, which will help the company achieve its vision of becoming a provider of cloud AND data intelligence.
The full press release follows:
Alibaba Group today unveiled its first AI inference chip developed by T-Head under the Alibaba DAMO Academy, an initiative to lead technology development and scientific research.
The high-performance AI inference chip, a neural processing unit (NPU) named Hanguang 800 that specialises in accelerating machine learning tasks, was announced at Alibaba Cloud’s annual flagship Apsara Computing Conference. It is currently being used internally within Alibaba’s business operations, especially in product search and automatic translation on e-commerce sites, personalised recommendations, advertising, and intelligent customer service. These areas require extensive computing power for the AI tasks that optimise the shopping experience.
“The launch of Hanguang 800 is an important step in our pursuit of next-generation technologies, boosting computing capabilities that will drive both our current and emerging businesses while improving energy-efficiency,” said Jeff Zhang, Alibaba Group CTO and President of Alibaba Cloud Intelligence. “In the near future, we plan to empower our clients by providing access through our cloud business to the advanced computing that is made possible by the chip, anytime and anywhere.”
A key goal for Alibaba Cloud is to offer a leading technology infrastructure that benefits companies of all sizes and narrows existing gaps in the access to technology, ultimately making the world more inclusive.
Propelled by a self-developed hardware framework, as well as highly optimised algorithm designs tailored for business applications such as retail and logistics in the Alibaba ecosystem, Hanguang 800 has recorded remarkable performance in tests. Single-chip computing performance reached 78,563 IPS at peak, with a computational efficiency of 500 IPS/W in the ResNet-50 inference test. Both scores are well above the industry average, reflecting a balance between powerful computing capability and a high level of computational efficiency.
For example, merchants upload around one billion product images to Taobao, Alibaba’s e-commerce site, every day. It previously took an hour to categorise this volume of images and then tailor search results and personalised recommendations for hundreds of millions of consumers. With Hanguang 800, the same task now takes the machine only five minutes.
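As a rough, back-of-the-envelope illustration of what those figures imply (a sketch based only on the numbers quoted in the press release, not on any published benchmark methodology, and for the overall system handling the workload rather than a single chip):

```python
# Back-of-the-envelope arithmetic using the figures quoted above (illustrative only).
images_per_day = 1_000_000_000   # ~1 billion product images uploaded to Taobao daily

old_time_s = 60 * 60             # previously ~1 hour to categorise the daily batch
new_time_s = 5 * 60              # ~5 minutes with Hanguang 800

speedup = old_time_s / new_time_s                 # end-to-end speedup, ~12x
implied_throughput = images_per_day / new_time_s  # ~3.3 million images per second (aggregate)

print(f"Speedup: {speedup:.0f}x")
print(f"Implied aggregate throughput: {implied_throughput:,.0f} images/s")
```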
Alibaba’s research unit, T-Head (whose Chinese name is “Pingtouge”, meaning “honey badger”), leads the company’s chip design innovation for both cloud and edge computing. It is also responsible for nurturing an inclusive edge-to-cloud computing ecosystem by collaborating with global partners in the chip industry.
Earlier this year, T-Head debuted XuanTie 910, a high-performance IoT processor based on RISC-V, the open-source instruction set architecture (ISA). XuanTie 910 was designed to serve heavy-duty IoT applications that require high-performance computing, such as AI, networking, gateways, self-driving vehicles and edge servers.
Global developers have already been able to access certain code from the high-performance processor and leverage it to develop prototypes of their own chips.

