A team of computer scientists from the University of Surrey has developed four innovative techniques that allow large-scale AI software to run on restrictive, serverless platforms without compromising its performance. The methods shrink AI workloads enough to make them viable on serverless platforms, while storing key elements in the cloud, thereby freeing up much-needed space for further software development and reducing the associated costs.
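The article does not detail the Surrey techniques themselves, but one generic way to shrink an AI workload for a size-limited platform is weight quantization: storing model parameters as 8-bit integers plus a scale and offset, rather than as 64-bit floats. A minimal sketch of that general idea (the arrays and function names here are illustrative, not taken from the Surrey work):

```python
import numpy as np

def quantize(weights: np.ndarray):
    """Map float weights onto 8-bit integers plus a (scale, offset) pair."""
    lo, hi = weights.min(), weights.max()
    scale = (hi - lo) / 255 if hi > lo else 1.0
    q = np.round((weights - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q: np.ndarray, scale: float, lo: float) -> np.ndarray:
    """Approximate reconstruction of the original weights."""
    return q.astype(np.float64) * scale + lo

weights = np.random.randn(1000)        # stand-in for real model parameters
q, scale, lo = quantize(weights)
print(q.nbytes, weights.nbytes)        # 1000 vs 8000 bytes: an 8x reduction
restored = dequantize(q, scale, lo)
print(np.abs(weights - restored).max() < scale)  # error bounded by one step
```

The trade-off is a small, bounded reconstruction error in exchange for an 8x smaller artifact, which can be the difference between fitting inside a platform's deployment limit or not.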
The team has also developed methods to improve the robustness of a software system's simple and complex processes, which helps its predictive performance, and describes a technique that helps systems handle timestamped, time-series data while operating on a restrictive platform.
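The time-series technique itself is not described in the article; as a generic illustration of handling timestamped data under tight memory limits, a fixed-span sliding window keeps only recent observations, so memory use stays bounded no matter how long the stream runs (class and parameter names below are hypothetical):

```python
from collections import deque

class SlidingWindow:
    """Retain only observations from the last `span` seconds, so that
    memory stays bounded on a restricted serverless platform."""
    def __init__(self, span: float):
        self.span = span
        self.buf = deque()              # (timestamp, value) pairs, oldest first

    def add(self, ts: float, value: float):
        self.buf.append((ts, value))
        # Evict anything that has fallen out of the window.
        while self.buf and self.buf[0][0] < ts - self.span:
            self.buf.popleft()

    def mean(self) -> float:
        return sum(v for _, v in self.buf) / len(self.buf)

w = SlidingWindow(span=60.0)            # a one-minute window
for ts, v in [(0, 10.0), (30, 20.0), (90, 30.0)]:
    w.add(ts, v)
print(len(w.buf), w.mean())             # the ts=0 reading has been evicted
```

Windowed summaries like this are a common way to feed a predictive model from a live stream without ever holding the full history in RAM.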
The development of machine learning (AI) software on serverless platforms is thought to be critical for future smart railway networks and autonomous vehicles, which is why the University of Surrey's methods are significant.
“We believe that these techniques developed at the University of Surrey will save creators valuable resources and open up a wide spectrum for innovative applications, giving them every opportunity to realise their vision for their AI-based products,” said Dr Sotiris Moschoyiannis, Senior Lecturer in Complex Systems at the University of Surrey.
Serverless platforms have brought a host of benefits to developers and companies working on real-time, cloud-based software; thanks to their automatic scalability they save time, cost and stress. However, these platforms – such as Amazon Web Services’ Lambda – currently come with restrictions: size limits (up to 250MB for Lambda), restricted RAM allocation, and a time limit after which running code is abruptly terminated. The lack of GPU support has also turned developers away from using serverless products for AI workloads.
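One common workaround for a deployment size limit – consistent with the article's mention of storing key elements in the cloud, though not confirmed as the Surrey method – is to ship only the code in the function package and pull the model from cloud storage at cold start, caching it in the function's temporary directory for warm invocations. A sketch of that pattern (the fetch function is a placeholder for a real download such as boto3's `download_file`, stubbed here so the example runs anywhere):

```python
import os
import tempfile

MODEL_CACHE = os.path.join(tempfile.gettempdir(), "surrey_sketch_model.bin")

def fetch_from_cloud_storage(dest: str):
    """Placeholder for a real object-store download; writes dummy
    bytes so this sketch is self-contained and runnable."""
    with open(dest, "wb") as f:
        f.write(b"\x00" * 1024)

def get_model_path() -> str:
    """Download the model on the first invocation only; warm invocations
    of the same container reuse the cached copy in the temp directory."""
    if not os.path.exists(MODEL_CACHE):
        fetch_from_cloud_storage(MODEL_CACHE)
    return MODEL_CACHE

def handler(event, context):
    """Lambda-style entry point: the deployment package stays small
    because the model lives in cloud storage, not in the package."""
    path = get_model_path()
    return {"model_bytes": os.path.getsize(path)}

print(handler({}, None))
```

The download cost is paid once per cold start; the trade-off is slower first invocations in exchange for a package that fits the platform's limit.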
“The coming 5G revolution will see an explosion of Internet of Things applications that will change many aspects of our day-to-day life. However, developers tasked with producing these game-changing applications are hamstrung by online services that are restrictive in ways that can hamper many projects,” said Dr Moschoyiannis.
The team put their findings to the test in the Real Time Flow project (funded by EIT Digital) by using their new techniques to build AI software that predicts a wide range of activity across the UK’s rail network – including how many people use a certain rail line or a certain train service, and even the traffic activity at level crossings. The team built their solution on Lambda, although it is compatible with any standard cloud provider. The Real Time Flow project also used the Flo.w platform by Emu Analytics and partners Ferrovial and Amey.