XMOS and Plumerai have partnered to develop binarised neural network (BNN) capabilities that allow AI to be embedded in a wide range of everyday devices at low power and cost.
The partnership combines Plumerai's Larq software library for training BNNs with the xcore.ai crossover processor from XMOS, which provides native support for BNN inference. Together, the two technologies will deliver a BNN capability that is up to four times more efficient than existing edge AI solutions.
This solution will enable a new generation of devices to run tasks that make our lives simpler and safer. These could include everything from confirming that a shopping package has been delivered to a safe place, to managing traffic flows, supporting remote healthcare applications and keeping store shelves stocked. While BNNs are an emerging technology, their future potential is enormous.
A typical deep-learning application uses models with tens of millions of parameters, and despite the move to 16-bit and 8-bit quantisation there is still a pressing need for faster, more efficient deep-learning and AI systems. That's where BNNs come in.
BNNs are the most efficient form of deep learning, promising to transform the economics of edge intelligence by reducing each weight and activation all the way down to a single bit. However, there are significant challenges in making BNNs commercially viable: they demand dedicated chip-design support for efficient inference, and new software algorithms for training.
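The efficiency gain comes from the arithmetic itself: once weights and activations are constrained to +1/-1, a dot product collapses into an XNOR followed by a popcount, which hardware can execute far more cheaply than multiply-accumulate. The sketch below is a plain-Python illustration of that trick, written for clarity rather than speed; it is not Larq or xcore.ai code, and all function names are hypothetical.

```python
# Illustrative sketch of BNN arithmetic (not Larq or xcore.ai code).
# Shows why ±1 quantisation is cheap: a dot product of two ±1 vectors
# equals 2 * (number of matching bits) - n, i.e. XNOR + popcount.

def binarise(values):
    """Map real values to +1/-1 (the single-bit encoding BNNs use)."""
    return [1 if v >= 0 else -1 for v in values]

def to_bits(signs):
    """Pack +1/-1 signs into 0/1 bits: +1 -> 1, -1 -> 0."""
    return [(s + 1) // 2 for s in signs]

def xnor_popcount_dot(a_bits, b_bits):
    """Dot product of two ±1 vectors via XNOR and popcount.

    Each matching bit pair contributes +1, each mismatch -1, so
    dot = matches - (n - matches) = 2 * matches - n.
    """
    n = len(a_bits)
    matches = sum(1 for x, y in zip(a_bits, b_bits) if x == y)  # popcount of XNOR
    return 2 * matches - n

# Check against the ordinary ±1 dot product.
w = binarise([0.7, -1.2, 0.1, -0.4])    # weights      -> [+1, -1, +1, -1]
x = binarise([-0.3, -2.0, 0.9, 0.5])    # activations  -> [-1, -1, +1, +1]
ref = sum(a * b for a, b in zip(w, x))  # -1 + 1 + 1 - 1 = 0
assert xnor_popcount_dot(to_bits(w), to_bits(x)) == ref
```

In a real implementation the bits would be packed 32 or 64 to a machine word, so one XNOR plus one popcount instruction replaces dozens of multiply-accumulates, which is the source of the efficiency claims above.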
“BNNs gained prominence in the news recently, with Apple’s purchase of Xnor.ai for a reported $200m. It’s little surprise that Apple is exploring AI capabilities at the edge, with advanced machine learning algorithms that can run efficiently in low-power, offline environments,” said Mark Lippett, XMOS CEO.