New guidance aims to make AI- and ML-based autonomous technologies safer

A team of UK computer scientists from the University of York has developed guidelines to make machine learning (ML) and artificial intelligence (AI) in autonomous technologies safe.

As robots, delivery drones, smart factories and driverless cars become pervasive in industry and everyday life, safety regulation of autonomous technologies remains a grey area, lacking robust safety nets: global guidelines for autonomous systems are less stringent than those for other high-risk technologies. Current standards often lack detail, and some new AI- and ML-based technologies arrive on the market unsafe.

“The current approach to assuring safety in autonomous technologies is haphazard, with very little guidance or set standards in place. Sectors everywhere struggle to develop new guidelines fast enough to ensure that robotics and autonomous systems are safe for people to use. If the rush to market is the most important consideration when developing a new product, it will only be a matter of time before an unsafe piece of technology causes a serious accident,” said Dr Richard Hawkins, Senior Research Fellow and one of the authors of the new safety guide.

Developed by the Assuring Autonomy International Programme (AAIP) at the University of York, the new guidance is called “Assurance of Machine Learning for use in Autonomous Systems”, or AMLAS. The process systematically integrates safety assurance into the development of ML components.

AMLAS has already been used in several applications, including transport and healthcare.
