The Embedded Vision Alliance is expanding its scope to cover the full range of edge AI technologies and applications, including computer vision and visual AI. To better reflect this broader scope, the Alliance will now be known as the “Edge AI and Vision Alliance”.
“Edge AI” means AI processing that occurs locally, whether on chip, device, or site. The concept encompasses hybrid approaches where some processing happens locally and some in the cloud, and includes edge devices that process different types of sensor data, from images and audio to vibration, radar, lidar, and so on. Examples of edge AI systems include a warehouse robot using cameras and lidar, a smart speaker with local wake-word processing, an on-premise video recorder with object detection and tracking, and a radar-based hospital patient monitor that uses AI to detect breathing, movement, and sleep, among others.
“We are seeing the same challenges in edge AI that we saw in computer vision almost a decade ago. Then, as now, a powerful new technology had become ready for widespread use, but the companies and developers creating systems and applications were struggling to figure out how to best incorporate it into their products. And then, as now, technology suppliers needed data and insights to help them find their best opportunities, as well as connections to customers and partners to enable them to grow their businesses,” said Jeff Bier, Founder of the Embedded Vision Alliance, now the Edge AI and Vision Alliance.
“Our fundamental purpose remains the same: inspiring and empowering the individuals and companies creating intelligent systems and applications; building a vibrant ecosystem by bringing together technology suppliers, end-product creators, and partners; and delivering timely insights into relevant market research, technology trends, standards and application requirements. The difference is that our mission now includes not just vision but the full range of edge AI technologies and applications,” he added.