Is the future of AI in the cloud or on the edge?
From Alexa in your living room to the cobots autonomously scuttling around Amazon warehouses, AI has gone mainstream.
But to really fulfil its role – including in the fourth industrial revolution – it needs to work smarter and learn quicker. The answer, say many, is taking AI to the edge.
To mark the launch of our latest publication, M&A Market Report 1H 2019: Artificial Intelligence, we ask if AI is about to become post-cloud, or if the edge is just the middleman.
“As the application of AI grows increasingly widespread across almost all verticals, several industries are now dependent on the advancements of AI to ensure their growth and competitiveness. […] Fundamental to all technology developed under the AI banner are infrastructure technologies such as open source frameworks and edge AI, which become springboards for the development of the sub-sector technologies they support.”
The cloud versus the edge in AI
In recent years, cloud computing has become the norm, but there are limitations to sending massive datasets on an all-expenses-paid round trip to remote data centres to be processed.
The answer could be edge computing, which moves processing tasks from the cloud or remote data centres to algorithms based on devices sited at the edge of the local network.
An obvious benefit of this approach in AI is the reduction in latency. Cloud-based applications can receive, process and return data in the blink of an eye, but are still not quick enough for AI functions where milliseconds count.
The usefulness, not to mention safety, of machines like driverless cars and robot surgeons, for example, relies on being able to collect, analyse and act upon information in real time. There’s no room for latency.
Data intensity can be another issue, especially when dealing with data-rich functions such as medtech algorithms designed to process and learn from millions of genetic markers. Quite simply, sending all that information to a remote data centre is expensive.
Another limitation of the cloud is its need for high-quality, reliable connectivity, which rules out the use of AI applications in remote locations, from parts of developing countries to rural farms.
Edge devices are designed to carry out their processing tasks locally, only communicating with the wider network when necessary. This also has the bonus of extending battery life – another advantage for functions such as intelligent crop management in rural areas or health monitoring in remote villages without mainstream services.
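The pattern described above – process on-device, transmit only when necessary – can be illustrated with a minimal sketch. The sensor, threshold and summary format here are hypothetical stand-ins, not any vendor’s API; the point is simply that the full batch of readings never leaves the device, only a small summary does.

```python
import random

ANOMALY_THRESHOLD = 8.0  # hypothetical alert level for a soil-moisture sensor


def read_sensor():
    """Stand-in for a local sensor read on the edge device."""
    return random.uniform(0.0, 10.0)


def process_locally(readings):
    """Summarise a batch on-device; return only what must be uplinked.

    Transmitting this small summary instead of every raw reading is what
    saves bandwidth and battery on a remote, intermittently connected device.
    """
    anomalies = [r for r in readings if r > ANOMALY_THRESHOLD]
    return {"count": len(readings), "anomalies": anomalies}


readings = [read_sensor() for _ in range(100)]
uplink = process_locally(readings)
print(f"sampled {uplink['count']} readings, "
      f"uplinking {len(uplink['anomalies'])} anomalies")
```

In practice the uplink step would be gated further – batched, deferred until connectivity returns, or skipped entirely when nothing crosses the threshold.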
And of course, there’s data security. The more information kept on the cloud, the higher the risk of a data breach.
Autonomous vehicles, IoT, healthtech, commerce, insurance and smart cities all need accurate, real-time data in order to ensure their growth and competitiveness.
Experts believe the key to success is the edge’s ability to facilitate millions of specialist tasks, rather than the cloud’s one-size-fits-all approach.
That’s not to say AI is post-cloud – rather, these two methods of data processing can work together.
Individual, bespoke edge devices can learn from and adapt to their own environment, making each algorithm an expert in its own field and sharing that information when necessary.
Machine learning models can be trained at data centres before becoming largely autonomous – processing and local learning are carried out on edge devices, which notify the cloud or remote data centres when there’s a problem or new occurrence.
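A minimal sketch of this division of labour: a model is fitted on centrally held data (the “cloud” side), then shipped to a device that runs inference locally and escalates only novel readings. The mean/standard-deviation model and the `infer` helper are illustrative assumptions, standing in for a real trained model and messaging layer.

```python
import statistics

# --- "Cloud" side: fit a simple model on centrally held training data ---
training_data = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7, 5.3]
model = {
    "mean": statistics.mean(training_data),
    "stdev": statistics.stdev(training_data),
}

# --- "Edge" side: run inference locally, report only novelties ---
def infer(reading, model, k=3.0):
    """Return (is_normal, message_for_cloud).

    Routine readings are handled entirely on-device; anything more than
    k standard deviations from the training mean is flagged back to the
    cloud as a new occurrence worth retraining on.
    """
    deviation = abs(reading - model["mean"])
    if deviation > k * model["stdev"]:
        return False, f"novel reading {reading} (deviation {deviation:.2f})"
    return True, None


normal, msg = infer(5.05, model)      # handled locally, nothing uplinked
anomalous, alert = infer(9.0, model)  # escalated to the cloud
```

The cloud can then fold escalated readings into the next training run, closing the loop the paragraph above describes.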
This addresses the main limitations of the cloud in AI – reducing latency, power and bandwidth consumption while improving privacy – yet keeps the benefits of central data storage.
What the markets say
It’s a model the markets appear to be getting behind wholeheartedly, and the edge AI software market is expected to grow from $356 million in 2018 to $1.15 billion by 2023.
As highlighted in our AI M&A market report, edge computing is spawning its own ecosystem of successful start-ups.
In July 2018, for example, Swim.ai raised $10 million in a Series B round that included strategic investor and chip designer Arm. It brought the company’s total funding to around $18 million.
Swim.ai’s product allows connected edge devices to share insights and train each other, using local data processing and analytics. Demand for the software is driven by its unique peer-to-peer capability, which allows edge devices to “talk” to each other locally on existing equipment.
According to Rusty Cumpton, Swim.ai’s co-founder and CEO, “efficiently processing edge data and enabling insights to be easily created and delivered with the lowest latency are critical needs for any organisation.”
Joining the dots
Despite all the excitement around the edge, it is not here to eat the cloud, but rather act as a conduit between applications and data centres.
This partnership will allow AI to become quicker, more efficient and expert in all manner of subjects. It will allow algorithms to learn from and respond to their own environment, make local decisions based on local data and feed back into the cloud when needed.
Local processing will develop and deliver methods of building cleaner, smarter, more efficient ways of working, tailored to each sector, and backed up remotely.
In other words, while the cloud is the overarching brain containing all the data industry 4.0 needs to prosper, the edge is the hive mind that feeds it.