
Among the patterns in our ongoing series of articles about edge computing: Edge matters where latency matters, and vice versa. And latency almost always matters when it comes to running artificial intelligence/machine learning (AI/ML) workloads.

As Brian Sathianathan, CTO of Iterate.ai, told us: “Good AI requires data. Great AI requires a lot of data, and it demands it immediately.”

That’s both the blessing and the curse for any sector – industrial and manufacturing are prominent examples, but the principle applies widely across businesses – that generates tons of machine data outside of its centralized clouds or data centers and wants to feed that data to an ML model or other form of automation for any number of purposes.

Whether you’re working with IoT data on a factory floor, or medical diagnostic data in a healthcare facility – or one of many other scenarios where AI/ML use cases are rolling out – you probably can’t do so optimally if you’re trying to send everything (or close to it) on a round-trip from the edge to the cloud and back again. In fact, if you’re dealing with huge volumes of data, your trip might never get off the ground.

“I’ve seen situations in manufacturing facilities where there is ‘too much’ data to go from a robot on the floor, through the local network, and then all the way to the cloud and back,” Sathianathan told us. “That’s no good, because, as manufacturing CIOs know, decisions must be made instantly to be effective.”

[ Also read Edge computing: 4 pillars for CIOs and IT leaders. ]

Even if you don’t hit the “too much data” threshold, the value in AI/ML – and automation in general – derives in large part from speed. And that’s the first thing IT leaders should know about running AI/ML workloads in edge environments: Speed matters, and its counterpart – latency – can be a killer. Let’s unpack that and a few other realities about AI/ML at the edge.

1. Speed is of the essence

Let’s underline all of the above: The value of IoT data – or any other data in edge environments – is very often linked to the speed with which it can be processed, analyzed, and acted upon.

In most automation contexts, speed is measured in small fractions of a second.

“Taking data from a smart device up to the cloud to run a machine-learning model and then delivering the output of that model back to the smart device takes too long in use cases where milliseconds of additional latency matter,” Chris McDermott, VP of engineering at Wallaroo.ai, told us.

Whether your “edge” is a vehicle, a public utility, an assembly line, or myriad other environments where speed matters, the costs – both financial and otherwise – of data transit and latency will likely be too great to bear.

“In this case, the fastest place to run AI will be at the edge,” McDermott says.
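To put numbers on that, here’s a minimal Python sketch of a latency budget. The 50 ms decision window, the 120 ms cloud round trip, and the threshold “model” are all illustrative assumptions, not measurements from any of the environments above:

```python
import time

LATENCY_BUDGET_S = 0.050  # assumed: a 50 ms decision window on the line

def local_inference(reading: float) -> bool:
    """Stand-in for an on-device model: flag readings past a threshold."""
    return reading > 0.8

def cloud_inference(reading: float) -> bool:
    """Same decision, but pay a simulated network round trip first."""
    time.sleep(0.120)  # assumed ~120 ms edge-to-cloud-and-back transit
    return reading > 0.8

for infer, label in ((local_inference, "edge"), (cloud_inference, "cloud")):
    start = time.perf_counter()
    infer(0.9)
    elapsed_ms = (time.perf_counter() - start) * 1000
    verdict = "within" if elapsed_ms <= LATENCY_BUDGET_S * 1000 else "blows"
    print(f"{label}: {elapsed_ms:.1f} ms ({verdict} the 50 ms budget)")
```

Run at the edge, the decision fits the window with room to spare; send it on the round trip and it never will.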

2. Edge environments are the AI/ML use case right now

According to McDermott, the fastest-growing application of AI at the moment is in the diverse range of settings that comprise the “edge” – whether in a factory that makes cars or the car itself once it’s on the road. Ditto appliances, power plants, and the vast list of other contexts that now all effectively double as IT environments.

For a sense of scale, McDermott notes that semiconductor-based electronics used to make up about five percent of the cost of a car. Today, they account for more than 40 percent of the price.

Whether on a factory floor or on an oil derrick, AI and other forms of automation increasingly require proximity – meaning, processing power and other IT infrastructure nearby.

This pairs with another big-picture trend in edge computing: As the technology matures, the possibilities expand. That’s true from a hardware and architectural standpoint in manufacturing settings, according to Sathianathan from Iterate.ai.

“Advances in AI and edge servers with GPU-centric architectures are now becoming available and, for manufacturing CIOs, it’s a much better solution to start placing AI applications on the edge,” Sathianathan says.

It’s also true from a software and operational standpoint. Management is one of the key challenges in edge environments in general, and it’s not like you can send a help desk pro out to every edge node every time something goes wrong. Open source projects like MicroShift are helping extend critical platforms like Kubernetes out to the edge for consistency.
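Because MicroShift exposes the same Kubernetes API as the core, rolling an inference service out to an edge node can look like any other deployment. Here’s a minimal sketch using the official kubernetes Python client; the namespace, the inference-svc name, and the container image are hypothetical, and it assumes a kubeconfig already points at the edge node:

```python
# A sketch, not a reference deployment: pushes a (hypothetical) inference
# service to an edge node over the standard Kubernetes API, which is what
# MicroShift exposes. Assumes `pip install kubernetes` and a kubeconfig
# already pointing at the edge cluster.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() on the node

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="inference-svc"),  # hypothetical name
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "inference-svc"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "inference-svc"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="model",
                        image="registry.example.com/inference:1.0",  # hypothetical
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The point is consistency: the same manifest, tooling, and rollout process work in the core data center and on the edge node.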

3. It’s not an all-or-nothing choice

Like most other facets of edge computing strategy – and cloud computing strategy before it – this isn’t necessarily an “in” or “out” decision. That’s doubly true when it comes to running AI/ML workloads at the edge.

Red Hat technology advocate Gordon Haff wrote recently about the value of taking advantage of established patterns in edge architecture, and one of them is: You can do development centrally and inference locally.

“Portfolio architectures directly relevant to edge computing include industrial edge, which is applicable across several vertical industries, including manufacturing,” Haff says. “It shows the routing of sensor data for two purposes: model development in the core data center and live inference in the factory data centers.”
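Here’s a minimal sketch of that split, using scikit-learn and joblib as stand-ins (the random training data, the file name, and the feature shapes are all illustrative; the pattern, not the tooling, is the point):

```python
import numpy as np
import joblib
from sklearn.ensemble import RandomForestClassifier

# --- In the core data center: develop the model on pooled sensor data ---
X_train = np.random.rand(1000, 4)            # stand-in for historical readings
y_train = (X_train[:, 0] > 0.8).astype(int)  # stand-in defect labels

model = RandomForestClassifier(n_estimators=50).fit(X_train, y_train)
joblib.dump(model, "defect_model.joblib")    # the artifact shipped to the edge

# --- In the factory data center: load once, run live inference ---
edge_model = joblib.load("defect_model.joblib")
live_reading = np.array([[0.91, 0.2, 0.5, 0.7]])
print("defect" if edge_model.predict(live_reading)[0] else "ok")
```

Training stays where the compute and the historical data live; only the serialized model travels to the factory.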

[ Related read: Edge computing: 5 use cases for manufacturing. ]

There’s a cost optimization driver here, too: In IoT environments (or any environment where machines are generating significant data), the volume of information is potentially massive. But that doesn’t mean it’s all needed.

There are “scenarios where telemetry data is in such volume that transport and storage costs in the cloud make a difference, especially if 99 percent of the telemetry data isn’t used for any further purpose,” McDermott says. “In this case, enterprises prefer to run the models at the edge and only take back to the cloud the data that matters.”
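Here’s a minimal sketch of that filtering pattern; the scoring function, the threshold, and the send_to_cloud stub are all assumptions for illustration:

```python
import random

def edge_score(reading: float) -> float:
    """Stand-in for a local model scoring one telemetry reading."""
    return abs(reading - 0.5)

def send_to_cloud(reading: float, score: float) -> None:
    """Stand-in for the (comparatively expensive) uplink to the cloud."""
    print(f"uplinked: reading={reading:.3f} score={score:.3f}")

THRESHOLD = 0.495  # assumed: tuned so roughly 99% of readings stay local

for _ in range(10_000):
    reading = random.random()
    score = edge_score(reading)
    if score > THRESHOLD:  # only the readings that matter leave the edge
        send_to_cloud(reading, score)
    # everything else is dropped (or aggregated) locally
```

The uplink fires only for the small fraction of readings worth keeping; the other 99 percent never incur transport or storage costs.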

Put another way: You probably already understand the hybrid cloud. Anticipate a similar familiarity with the hybrid edge.

4. Keeping AI/ML at the edge may be more secure

There are multiple scenarios where the security of an ML application (or other automation) may require keeping sensitive data on-site.

Edge security is itself a burgeoning focus for IT leaders implementing or increasing their edge footprints. Edge environments come with their own security implications, but the general truth here is: When data travels from environment to environment, its threat surface expands. Keeping sensitive edge data at the edge may be the strongest posture.

Security assessments will vary, and there can also simply be practical considerations: connectivity to the public/open internet may be less than ideal – or impossible.

“In some environments connectivity to the open internet is a blocker so AI will have to run at the edge,” McDermott told us. “You can think of remote locations like oil derricks or gas pipelines or even in space where connectivity is unreliable, or for highly secure systems that can’t be connected to the open internet. In these scenarios, machine learning will have to run at the edge.”

[ Want best practices for AI workloads? Get the eBook: Top considerations for building a production-ready AI/ML environment. ]
