Fast-Track Real-Time Kinematics Solution Development

Using GPS as a navigation aid while driving is useful and frustrating in equal measure. While having live directions is a good thing, there are frequent periods when you have to wait for the satellite navigation to catch up with where you actually are on the road.

Such discrepancies in location mapping are mildly annoying at worst, but they are unforgivable when it comes to applications that need more precise positioning information—like autonomous driving. And it’s why real-time kinematics (RTK) is gaining in popularity in applications that need precise positioning.

RTK involves sending “correction data” to a moving receiver, thereby increasing the positioning accuracy that conventional global navigation satellite systems (GNSS) provide. While conventional GNSS receivers update their position about once every second, RTK delivers updates roughly 200 times more frequently. The net result is positioning accurate to within one or two centimeters, even in fast-moving vehicles.

RAB4 decreases barriers to the #technology and speeds up the pre-engineering phase and time to market. Rutronik Elektronische Bauelemente GmbH via @insightdottech

RTK “Sandbox” Provides Isolated Testing

And while RTK has been on the market for some time now, only lately has it become economical for wider access, says Stephan Menze, Head of Global Innovation Management at Rutronik Elektronische Bauelemente GmbH, an electronics components and technology solutions provider.

Taking a technology for a test drive sometimes involves going through a lot of detours. Companies often find that they need specific infrastructure and a whole range of different hardware components simply to find out if the solution is even worth pursuing. An isolated testing environment eliminates these roadblocks and delivers faster answers. And it’s why Rutronik developed the Rutronik Adapter Board RAB4, specially designed as a sandbox for RTK development projects. RAB4 decreases barriers to the technology and speeds up the pre-engineering phase and time to market.

RAB4 is a product of Rutronik System Solutions, which launched in 2021 with the goal of creating tools that drive sales and let customers test specific markets. In the case of RTK, for example, those markets might include drones, lawn mowers, or even autonomous driving, all of which can benefit from precision positioning.

RAB4 Adapter Board Components

RTK needs GPS data and a base station that sends corrections. Wi-Fi or Bluetooth connections can work for local base stations. But larger-scale projects, such as implementations in smart cities or agriculture, will likely need LTE wireless.

The RAB4 Adapter Board has all the necessary elements to test RTK technology: a high-precision RTK positioning module from Unicore; a 4G LTE module for connectivity; the necessary antennas; and a SIM card preloaded with 100 MB of data so companies can download results and compare data received from the GNSS receiver against data from the RTK receiver. (Video 1)

Video 1. RAB4 has everything needed for solution development in an isolated testing environment. (Source: Rutronik Systems Solutions)

If a Bluetooth connection is preferred, RAB4 offers an Arduino interface that pairs with the RDK3 base board, which provides such a connection. RAB4 can also link with the Text to Speech Adapter Board, which relays voice outputs on battery status, connection problems, and other information in up to 12 languages. Rutronik also includes software to deliver complete proof-of-concept packages. “We try to show the customer how a system can work, and the customer in the end is thankful for easy access to hardware and software,” Menze says.

Demonstrating Real-Time Kinematics Through a Rover App

To make access to RTK even easier, Rutronik developed a “Rover” and a related app, which the company showcased in the Rutronik booth at embedded world 2024, where visitors could control it themselves. The rover is easy to operate via an app and can be controlled with centimeter precision.

Using an Arduino interface, RAB4 is combined with RDK3, a base board from Rutronik System Solutions, which allows a wireless connection via Bluetooth Low Energy. The reference station sends its measured GNSS position to the rover over Bluetooth in a real-time protocol. As a result, the rover knows its distance to the reference station and can navigate with centimeter precision using its position relative to the base station, eliminating the need to lay a wire in the ground as a boundary.
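For readers who want a feel for the math, here is a minimal Python sketch of the differential idea behind that relative positioning: the rover subtracts the error the base station observes at its known location from its own raw fix. This is only an illustration with invented numbers; a production RTK solver works on carrier-phase observations and standardized correction streams rather than finished position fixes.

```python
# Minimal sketch of the differential idea behind RTK-style correction.
# All values are invented; a real solver uses carrier-phase observations,
# not finished position fixes like these.

def apply_correction(rover_raw, base_raw, base_surveyed):
    """Remove the error the base station observes from the rover's raw fix.

    All arguments are (east, north) tuples in meters in a shared local frame.
    """
    # Common-mode error = what the base measured minus where it actually is.
    err_e = base_raw[0] - base_surveyed[0]
    err_n = base_raw[1] - base_surveyed[1]
    # The rover sees roughly the same error, so subtract it from its raw fix.
    return (rover_raw[0] - err_e, rover_raw[1] - err_n)

if __name__ == "__main__":
    base_surveyed = (0.0, 0.0)    # known, surveyed base position
    base_raw = (1.2, -0.8)        # base's momentary GNSS error: +1.2 m E, -0.8 m N
    rover_raw = (101.3, 49.1)     # rover's raw fix, carrying roughly the same error
    print(apply_correction(rover_raw, base_raw, base_surveyed))  # -> (100.1, 49.9)
```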

Future iterations of RAB4 are being planned, including models that will use the Intel RealSense camera for collision detection on the rover and other applications. As applications scale into the real world, RTK technology will need high processing power, for which Rutronik will also use Intel, Menze says. The sandbox system currently uses an Infineon microcontroller, but Rutronik plans to use higher-performance Intel processors in future iterations of the RTK and other proof-of-concept solutions. A new base board with an Intel processor is in the development phase.

As for RTK itself, expect more implementations of the technology in the future as smart cities become more common. In such cities, traffic lights can receive data and regulate traffic but to do so safely will need the kind of precision positioning that RTK delivers. Autonomous driving is an exciting use case for RTK, even if it might take a few years to implement, Menze says. Last-mile delivery using automated guided vehicles (AGVs) and drones is an equally promising avenue.

Whatever markets might be imagined for RTK technology, the Rutronik solution can provide the necessary components to evaluate fit before the robot takes to the road.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

AI Workloads Scale with Next-Gen Processors and COM Express

X-rays, ultrasounds, and endoscopy machines generate massive volumes of data—sometimes too much to make sense of. In response, medical device OEMs integrate AI directly into medical imaging and diagnostics machines to make screening procedures more efficient, effective, and accessible for clinicians and patients alike.

Supporting AI-enabled medical imaging and diagnostics requires high-end hardware with the graphics and compute performance to execute intelligent imaging workloads in real time. Until recently, the easiest way to enable these capabilities was through discrete accelerators—an approach that can be expensive and inefficient in terms of upfront hardware costs and power consumption.

But by far the most costly design decision is the wrong system architecture. AI is evolving rapidly, so without flexible, adaptable, and upgradable system hardware, equipment can become obsolete before it is adequately broken in.

“AI workloads are advancing so quickly, it’s sort of dangerous when you start talking about hardware at all,” says Zeljko Loncaric, Market Segment Manager for Infrastructure at congatec AG, a global leader in embedded solutions. “That’s one of the most significant challenges facing medical device designers. They also face hurdles in implementing newer functionality in long-lifecycle systems.”

COM Express modules based on Intel® Core Ultra mobile processors address these challenges. They offer superior performance and efficiency in AI workload processing thanks to integrated GPUs and NPUs. And their inherent modularity streamlines the initial design process while enabling easy upgrades from one processor generation to the next.

#AI #technology represents a meaningful advancement for #medical imaging, with the potential to significantly improve diagnostic efficiency and accuracy. @congatecAG via @insightdottech

Balancing Edge AI Longevity and Innovation in Embedded Computing

Because medical imaging devices must undergo a comprehensive certification process before they can be used, their lifecycles tend to average a decade or more. Meanwhile, AI technology represents a meaningful advancement for medical imaging, with the potential to significantly improve diagnostic efficiency and accuracy in ultrasounds, mobile ultrasounds, endoscopy machines, X-rays, and more.

But faced with the time and expense of redesigning and recertifying a medical device, OEMs hesitate to transition to next-generation platforms that support AI without an extremely compelling business case. And without being able to answer how long a system design will remain relevant, that business case becomes less compelling.

Enter the new Intel Core Ultra Mobile processors, the first x86 processors to integrate an NPU and one of the most power-efficient SoC families on the market today. The integrated NPU enables support for advanced AI workloads without the added cost and complexity of a discrete accelerator. Paired with the SoC’s leading performance per watt, it lets medical device designers better manage power consumption and thermal efficiency in resource-constrained edge AI deployments.

“The processor’s per-watt performance is also highly interesting in the context of mobile ultrasound devices and other battery-powered systems,” notes Maximilian Gerstl, Product Line Manager at congatec. “What Intel did with the architecture is very impressive. The numbers look great in terms of performance—not only on the CPU side, but also in terms of graphics. The new processors also offer an unprecedented level of flexibility to customers, allowing them to upgrade their systems across multiple generations while retaining the same form factor.”

“If there’s not a great new technology coming up, organizations will stay on the same module for 10 years or more so that they don’t have to recertify,” he continues. “Intel Core Ultra Mobile processors are a big step up. Healthcare organizations will have to think about changing to it.”

Open-Standard Modules Fast-Track System Upgrades

The latest congatec conga-TC700 COM Express Compact module incorporates the processing performance and application-ready AI capabilities of Intel Core Ultra Mobile processors in a plug-and-play form factor. Medical device designers can leverage the module as a shortcut to building efficient edge AI systems while significantly improving time to market and reducing total cost of ownership (TCO). And since COM Express is an open hardware standard governed by the global technology consortium PICMG, the TC700 provides a vendor-neutral upgrade path whereby a legacy module can simply be swapped out for a higher-performance one with the same interfaces.

The conga-TC700 COM Express Compact module—alongside a host of other congatec products—will be featured in the company’s booth (3-241) at embedded world 2024 in Nuremberg, Germany, from April 9-11.

“The ability to quickly swap hardware means an organization can have its applications running for a very long time,” Gerstl explains. “Though they have to recertify new hardware components, they can bring over a lot of their software and hardware designs from previous applications.”

Intelligent Healthcare, Enabled by Edge AI Solutions

The conga-TC700 is supported by congatec’s OEM solution-focused ecosystem, which features efficient active and passive thermal management solutions, long-term support, and ready-to-use evaluation carrier boards. The company is also exploring how the open-source Intel® OpenVINO toolkit can empower its customers in the development and deployment of AI vision systems. According to Gerstl, the company is working on early benchmarking with specific use cases to help customers get their applications up and running more quickly.
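As a rough idea of what such early benchmarking can look like, the hedged Python sketch below compiles the same OpenVINO model for the CPU, the integrated GPU, and the NPU of a Core Ultra system and compares average latency. The model file name is a placeholder, the NPU device appears only on platforms and OpenVINO builds that expose it, and a static input shape is assumed.

```python
# Rough latency comparison across CPU / GPU / NPU with OpenVINO.
# "model.xml" is a placeholder IR file; device availability varies by platform.
import time
import numpy as np
import openvino as ov

core = ov.Core()
print("Available devices:", core.available_devices)   # e.g. ['CPU', 'GPU', 'NPU']

for device in ("CPU", "GPU", "NPU"):
    if device not in core.available_devices:
        continue
    compiled = core.compile_model("model.xml", device_name=device)
    request = compiled.create_infer_request()
    shape = tuple(int(d) for d in compiled.input(0).shape)   # assumes a static shape
    dummy = np.random.rand(*shape).astype(np.float32)
    start = time.perf_counter()
    for _ in range(100):
        request.infer({0: dummy})
    print(f"{device}: {(time.perf_counter() - start) / 100 * 1000:.2f} ms/inference")
```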

For congatec, the availability of Intel Core Ultra Mobile processors represents a considerable step forward in the price, performance, and power consumption of next-generation edge AI devices. For medical device OEMs, these processors provide a compelling path to new, AI-enabled imaging and diagnostics equipment.

“We will continue to enable AI acceleration, hardware, and software and bring it to our products,” Gerstl says. “We want to enable this new trend.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Using AI for Workplace Safety by Leveraging Real-Time Video

Workplace safety is a real problem across industries. According to a recent report, a whopping 374 million people suffer workplace accidents each year. It’s a scary number, but luckily businesses can act to reduce risks.

For example, many companies already have closed-circuit television (CCTV) camera systems in their infrastructure for routine security and asset management. These very same units can be used for additional purposes, such as identifying potential hazards to help decrease the number of workplace injuries.

Such implementations of video analytics technology run on the premise that one or two chance occurrences are likely isolated incidents, but frequent repetition can indicate a wider, worrisome pattern. And when it comes to employee health and safety, these negative patterns are costly yet preventable when companies study and act on video camera footage.

Safety Analytics in Asset-Heavy Industries

When AllGoVision, a video analytics software solution provider, first started out in the CCTV field, it focused on security and asset management but quickly realized the technology’s potential beyond security. According to Aji Anirudhan, Chief Sales and Marketing Officer for AllGoVision, its use in protecting employee health in all types of industries became quite clear.

Asset-heavy industries such as manufacturing and energy and utilities can especially benefit from video analytics to improve employee working conditions. In such industries, worker safety can be compromised in two fundamental ways. The first is when the situation on the ground changes rapidly; this is when accidents happen, such as burns from improperly handled hot metals or exposure to toxic fumes from oil or gas leaks. The second is when workers compromise their own health by not following safety protocols closely. Not wearing the right personal protective equipment (PPE) in high-risk working conditions, for example, can increase the risk of injury.

Asset-heavy industries such as #manufacturing and #energy and utilities can especially benefit from video #analytics to improve employee working conditions. @AllGoVision via @insightdottech

Traditional approaches to worker safety have been fairly passive, Anirudhan explains. Employee health and safety teams conduct risk assessments of various plausible harmful scenarios and develop training programs accordingly. “And whenever an incident happens, they investigate it and try to update their policy so they will minimize these accidents,” he says. While company-wide policies are making a dent in the number of workplace injuries, the problem is still significant. That is why “any solution which provides insights to understand patterns of unsafe behavior can help prevent accidents,” Anirudhan adds.

AllGoVision works hard to be that solution. Safety problems from noncompliance abound, especially at energy and utilities companies that work with oil and gas. Whether it’s related to drilling equipment on oil fields or the transport and storage of fuel, there’s a strong potential for mistakes at every step of the way, according to Anirudhan.

Mechanisms for AI Workplace Safety

To avoid compromising on worker safety, AllGoVision leverages AI to analyze the on-ground situation in real time. CCTV systems livestream video data, which AI safety analytics software can evaluate. The software catches violations or problem areas in real time and can alert frontline managers, who can proactively attend to these challenges. It also provides dashboards that help teams understand the current state of safety protocol adherence and monitor progress against improvement plans. Employees also want to be part of the solution, and such analytics facilitate discussions of workplace data without a “Big Brother” approach, Anirudhan says.
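As a rough illustration of how such a pipeline fits together (this is not AllGoVision’s software, just a generic sketch), the code below pulls frames from an existing CCTV stream, runs a hypothetical PPE-violation detector through OpenVINO, and calls a placeholder alert hook when a violation is detected. The model file, class index, stream URL, and alert function are all invented for the example.

```python
# Illustrative only: generic real-time PPE monitoring on a CCTV stream.
# "ppe_detector.xml", the class id, the RTSP URL, and the alert hook are
# hypothetical placeholders, not AllGoVision product interfaces.
import cv2
import numpy as np
import openvino as ov

detector = ov.Core().compile_model("ppe_detector.xml", device_name="CPU")
NO_HELMET = 1  # model-specific class id for a person without a helmet

def notify_safety_manager(frame, box):
    """Placeholder alert hook; a real deployment would page the frontline manager."""
    print("PPE violation detected at", box)

cap = cv2.VideoCapture("rtsp://camera.local/line1")  # existing CCTV camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Resize to the detector's input size and reorder to NCHW.
    blob = cv2.resize(frame, (640, 640)).transpose(2, 0, 1)[None].astype(np.float32)
    detections = detector(blob)[detector.output(0)]  # assumed layout: [1, N, 6]
    for x1, y1, x2, y2, score, cls in detections[0]:
        if score > 0.6 and int(cls) == NO_HELMET:
            notify_safety_manager(frame, (x1, y1, x2, y2))
```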

AllGoVision pays special attention to data privacy protocols, preserving only metadata and not saving individual worker footage. The company also works to make sure its models are bias-free, and employs consultants to ensure compliance with international data privacy regulations.

Experienced Safety Analytics Vendor

AllGoVision particularly shines because the company harnesses its extensive experience in video analytics to evaluate specific client situations and recommend comprehensive solutions that will deliver desired outcomes. The software’s plug-and-play format means it can integrate with existing infrastructure, adding a data layer to derive information from. The company works with systems integrators to integrate solutions into a larger video management package that they might deliver to clients.

Experience executing workplace safety protocols in a number of industries matters. “Because AI is democratized, pretty much anybody can access algorithms available on open source and create video analytics. But extrapolating that to a production environment where the cameras are different, the lighting is different, and the expectations are different, is the most challenging part,” Anirudhan says. “That’s where our strength in being able to address safety and employee productivity in different verticals, especially oil and gas, utilities, and manufacturing, comes into play.”

Over the years AllGoVision has evolved with different generations of Intel® platforms, Anirudhan says. The company was one of the early adopters of OpenVINO as well, and uses Intel® Xeon® processors. “The cost of running an algorithm has significantly come down and that is a saving for our customers. We see that as a huge advantage of working with Intel,” Anirudhan says.

Evolution of AI Workplace Safety

Anirudhan is excited about the many use cases for AllGoVision AI for workplace safety, including in buildings and infrastructure to detect fire and smoke, or in crowd control. The application of AI for workplace safety is still in the nascent stage, providing a huge opportunity for AI-driven solutions that make a significant impact.

Gone are the days of rinse-and-repeat policy-driven implementations. “Customers are seeing a clear value in moving to a real-time proactive approach,” Anirudhan says. Expect monitoring of more parameters, including worker fatigue, in the future. “There will be more wearables, sensors, IoT devices added to the workplace, which will all add to different use cases of machine and people management,” he adds.

In addition, Anirudhan sees enormous potential in generative AI to address more complex use cases, especially those that involve human-machine interaction.

AllGoVision is working hard to address worrisome workplace safety statistics. “We can make a big social impact if we could have AI actually address some of these challenges,” Anirudhan says.

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

Harmonizing Innovation with Audio-Based Generative AI

Artificial intelligence is an umbrella term for many different technologies. Generative AI is one we hear a lot about, and ChatGPT in particular gets a whole lot of press, but it’s not the only song in the generative AI playbook. One tune that Ria Cheruvu, AI Software Architect and Generative AI Evangelist at Intel, has been excited about lately is generative AI for the audio space (Video 1).

Video 1. Ria Cheruvu, Generative AI Evangelist for Intel, explores the business and development opportunities for audio-based generative AI. (Source: insight.tech)

But generative AI of any kind can be intimidating, and developers don’t always know exactly where to start or, once they get going, how to optimize their models. Partnering with Intel can really simplify the process. For example, beginning developers can leverage the Intel® OpenVINO notebooks to take advantage of tutorials and sample code that will help them get started playing around with GenAI. And then, when they’re ready to take it to the next level or ready to scale, Intel will be right there with them.

Ria Cheruvu talks with us about the OpenVINO notebook repository, as well as the real-world applications suggested by generative AI for audio, and the differences between the aspect of it that works for call centers and the aspect that can actually work for musicians.

What are the different areas of generative AI?

This space is definitely developing in terms of the types of generative AI out there. ChatGPT is not the only example of it! Text generation is a very important form of generative AI, of course, but there is also image generation, for example, using models like Stable Diffusion to produce art and prototypes and different types of images. And there’s also the audio domain, where you can start to make music, or make audio for synthetic avatars, as well as many other types of use cases.

In the audio domain, a fast runtime is especially important, and that’s one of the common pain points. You want models that are super powerful and able to generate outputs with high quality really quickly, and that takes up a lot of compute. So I’d say that the tech stack around optimizing generative AI models is definitely crucial, and it’s something I investigate as part of my day-to-day role at Intel.

What are the specific business opportunities around generative AI for audio?

It’s really interesting to think about using voice AI or conversational AI for reading in and processing audio, which is what you do with a voice agent, like a voice assistant on your phone. Compare that to generative AI for audio, where you’re actually creating the content—being able to generate synthetic avatars or voices to call and talk to, for example. And definitely the first business applications you think about are call centers, or metaverse applications where there are simulated environments that use this created audio.

But there are also some nontraditional business use cases in the creative domain, in content creation, and that’s where we start to see some of the applications related to generative AI for music. And to me this is incredibly exciting. Intel is starting to look at how generative AI can complement artists’ workflows: for example, in creating a composition and using generative AI to sample beats. There’s also a very interesting cultural element to how musicians and music producers can leverage generative AI as part of their content-creation workflows.

And so while it’s not a traditional business use case—like what you would see in call centers, or in interactive kiosks that use audio for retail—I do believe that generative AI for music has some great applications for content creation. Eventually it could also come into other types of domains where there is a need to generate sound bites, for example, creating synthetic data for AI system training.

“#GenerativeAI for music has some great applications for content creation. Eventually it could also come into other types of domains where there is a need to generate sound bites” – Ria Cheruvu, @intel via @insightdottech

What is the development process for generative AI for audio?

There are a couple of different ways that the generative AI domain is currently approaching this. One of them is definitely adapting the model architectures that are already out there for other types of generative AI models. For example, Riffusion is based on the architecture for Stable Diffusion, the image-generation model; it just generates waveforms instead of images.
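As a hedged sketch of that approach, the snippet below uses the Hugging Face diffusers library with the publicly shared Riffusion checkpoint (the model ID is assumed to be "riffusion/riffusion-model-v1") to turn a text prompt into a spectrogram image; a separate reconstruction step, omitted here, converts that spectrogram into an audible waveform.

```python
# Sketch of Riffusion-style audio generation; the model ID and defaults are
# assumptions, and converting the spectrogram image back to audio (for
# example with a Griffin-Lim reconstruction) is left out for brevity.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "riffusion/riffusion-model-v1",       # assumed Hugging Face model ID
    torch_dtype=torch.float16,
).to("cuda")

result = pipe("upbeat synth-pop drum loop", num_inference_steps=30)
result.images[0].save("loop_spectrogram.png")   # a spectrogram image, not a .wav
```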

I was speaking recently to someone who is doing research in the music domain, and one of the things we talked about was the diversity of input data that you can give these audio-domain models. It could be notes—maybe as part of a piano composition—all the way to just waveforms or specific types of input that are specialized for use cases like MIDI formats. There’s a lot of diversity there.

What technologies are required to train and deploy these models?

We’ve been investigating a lot of interesting generative AI workloads as part of the Intel OpenVINO toolkit and the OpenVINO Notebooks repository. We are incorporating a lot of key examples of audio generation as very useful use cases to prompt and test generative AI capabilities. We had a really fun time partnering with other teams across Intel to create Taylor Swift-type pop beats using the Riffusion model—all the way to more advanced models that generate audio to match something that someone is speaking.

And one of the things that I see with OpenVINO is being able to optimize all these models, especially when it comes to memory and model size, but also enabling flexibility between the edge and the cloud and the client.

OpenVINO really targets that optimization part. There’s a fundamental notion that generative AI models are big in terms of their size and their memory footprint; the foundations of all of these models, be it audio, image, or text generation, are simply very large. By using compression and quantization-related techniques, we’re able to achieve a large reduction in model size, often halving the footprint, while still ensuring that performance is very similar.
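A minimal sketch of what that optimization step can look like with recent OpenVINO and NNCF releases is below; the file names are placeholders and the exact size savings depend on the model.

```python
# Compress an OpenVINO model's weights to shrink its on-disk and in-memory
# footprint; placeholder file names, and results vary per model.
import openvino as ov
import nncf

model = ov.Core().read_model("audio_gen.xml")       # hypothetical generative model IR
compressed = nncf.compress_weights(model)           # 8-bit weight compression by default
ov.save_model(compressed, "audio_gen_compressed.xml")
```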

And all of this is motivated by a very interesting notion of local development. Music creators or audio creators are looking to move toward their PCs when creating content—as well as being able to work on the cloud in terms of intensive work like gathering audio data, recording it, annotating it, and collaborating with different experts to create a data set. And then they would be able to do other workloads on a PC and say, “Okay, now let me generate some interesting pop beats locally on my system and then prototype that in a room.”

What are some examples of how developers can get started with generative AI?

One example that I really love to talk about is how exactly you take some of these OpenVINO tutorials and workloads that we’re showing in the notebooks repo and then turn them into reality. At Intel we partner with Audacity, an open-source tool for audio editing and creation. It’s really a one-stop, Photoshop kind of tool for audio editing. And one of the things we’ve done is integrate OpenVINO with it through a plugin that we provide. Our engineering team took the code in the OpenVINO Notebooks repo from Python, converted it to C++, and then deployed it as part of Audacity.

It allows for more of that performance and memory improvement I mentioned before, but it’s also integrated directly into the same workflow that many different people who are editing and just playing around with audio are leveraging. You just highlight a sound bite and say “Generate,” and OpenVINO will generate the rest of it.

That’s an example of workflow integration that can be used for artist workflows; or to create synthetic audio for voice production for the movie industry; or for interactive kiosks in the retail industry; or for patient-practitioner conversations in healthcare. That seamless integration into workflows is the next step that Intel is very excited to drive and to help collaborate on.

What else is in store for generative AI—especially generative AI for audio?

When it comes to generative AI for audio, I think it’s “blink and you may miss it” for any particular moment in this space. It’s just amazing to see how many workloads have been added. But just looking into the near future—maybe end of year or next year—some of the developments I can start to see popping up are definitely around those workflows I mentioned before, and identifying where exactly you want to run them—is it on your local system, or is it on the cloud, or on some sort of mix of the two? That is definitely something that really interests me.

We are trying some things around audio generation on the AI PC with the Intel® Core Ultra and similar types of platforms, where—when you’re sitting in a room prototyping with a bunch of fellow musicians and just playing around—ideally you won’t have to access the cloud for that. Instead, you’ll be able to do it locally, export it to the cloud, and just move your workloads back and forth. And key to this is asking how we incorporate our stakeholders as part of that process—how do we exactly create generative AI solutions, instantiate them, and then maintain them over time?

Can you leave us with a final bit of generative AI evangelism?

Generative AI is kind of a flashy space right now, but almost everyone sees the value that can be extracted out of it if there is a future-proof strategy. The Intel value prop for the industry is really being able to hold the hands of developers, to show them what they can do with the technology, and also to help them every step of the way to achieve what they want.

Generative AI for audio—generative AI in general—is just moving so fast. So keep an eye on the workloads, evaluating, testing, and prototyping; they are definitely all key as we move forward into this new era of audio generation, synthetic generation, and so many more of these exciting domains.

Related Content

To learn more about generative AI, read Generative AI Solutions: From Hype to Reality and listen to Generative AI Composes New Opportunities in Audio Creation. For the latest innovations from Intel, follow them on X at @IntelAI and on LinkedIn.

 

This article was edited by Erin Noble, copy editor.

Real-Time Data Analytics Drive More-Efficient Operations

Managing assets is not easy. When you oversee a fleet of vehicles, for example, you need contextually accurate and detailed information. Simply measuring the maximum and minimum temperatures of the coolant in a vehicle whenever it is refueled is hardly enough to know what is going to break down and when. And an engine light warning is too generic—it could mean something serious like the radiator is about to give way or signal something harmless.

The Case for Real-Time Data Analytics

Asset-intensive industries are saddled with a range of data-related problems. Too often, asset managers act on information that is irrelevant or outdated. Even within an organization, the data the frontline worker needs to see and act on need not be the same as what’s necessary for plant managers to do their job. And the mechanisms to signal a problem, such as text alerts, might be in place, but false positives make workers complacent. Too much of the wrong and irrelevant information can lead to data fatigue. Finally, even when data is at hand, you might not have enough of it to make an informed decision.

These data problems are pervasive across industries, says Sahid Sesay, President of SmartConnect IoT, a provider of sensor data management solutions. It’s why asset managers and other frontline personnel need to have more relevant data in real time so they can make accurate data-driven decisions and be proactive rather than reactive.

The company’s no-code SC-IoTOS Sensor Edge Gateway Software—an aggregator that captures and translates IoT sensor data—connects any Intel® processor-based hardware to virtually any type of equipment, sensor, or camera at the edge. It securely collects, stores, normalizes, and streams captured data to and from anywhere and makes it available for analysis and further processing. Such a solution delivers the right kind of data at the right time to the right person.
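The sketch below illustrates that collect-normalize-stream pattern in general terms only; it is not the SC-IoTOS interface, which is no-code. The broker address, topic, field names, and sensor-reading helper are all invented.

```python
# Illustration of the collect/normalize/stream pattern, not SC-IoTOS itself.
# Broker, topic, field names, and the sensor-reading helper are invented.
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()  # paho-mqtt 1.x style; 2.x also takes a CallbackAPIVersion
client.connect("broker.plant.local", 1883)

def read_refrigeration_sensor():
    """Stand-in for a vendor-specific driver call returning one raw reading."""
    return {"tempF": 39.5, "vib": 0.12}

def normalize(raw, sensor_id):
    """Map a vendor-specific reading onto one common schema."""
    return {
        "sensor_id": sensor_id,
        "timestamp": time.time(),
        "temperature_c": (raw["tempF"] - 32) / 1.8 if "tempF" in raw else None,
        "vibration_mm_s": raw.get("vib"),
    }

while True:
    reading = normalize(read_refrigeration_sensor(), "chiller-07")
    client.publish("plant/line1/assets", json.dumps(reading))
    time.sleep(1)
```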

“When you increase the amount of quality streaming data, it puts decision-making and information at people’s fingertips across the organization,” Sesay says. And in the age of digital and industrial transformation, such access to meaningful and actionable insights is exactly what organizations need.

Ease of Operational Data Management Helps Systems Integrators

Integration of data sources within existing technology stacks and infrastructure is another significant hurdle that companies often must clear before they can access insights. The SmartConnect solution makes use of legacy intelligence and asset management systems, adding sensors as needed for desired metrics. By integrating data from both legacy and new sensors into one no-code information layer, the IoT solution eliminates the barrier to insights—a struggle for many companies.

There’s no custom work involved in layering the SmartConnect IoT solution onto existing data harvesting mechanisms, which lowers its price and makes the product popular with systems integrators. “Systems integrators can respond quicker to requests for proposals and remove risk from their operations. When they want to grow, they can grow without additional overhead. The solutions scale inherently. Going from PoC or pilot to production in a single step is normal. Plus, they can expand or adapt geographically and logically as needed,” Sesay says.

Data Integration Use Case

The SmartConnect solution helps a North Carolina-based food processing company find problems in its workflows and manage its operations better.

Sensors were monitoring a variety of parameters for asset management of refrigeration units and conveyor belts—from vibration to temperature, pressure, and the health of motors. But with multi-vendor sensors monitoring and controlling equipment and production processes, there was no unified data stream reflecting the actual health of the assets.

To solve this data gap, the company deployed an Intel-powered edge compute server on the factory floor running SmartConnect SC-IoTOS software—integrating on-prem data and processing it in the cloud to deliver real-time data analytics. The deployment enables a sustainable approach for all relevant stakeholders to access the information they need to keep operations running.

Before the processing plant started using the SC-IoTOS server to ensure a steady stream of data, asset health data flowed in only every few weeks, leaving wide gaps in the performance evaluation of critical and disparate assets. Even mistakes that seemed minor, like leaving a refrigerator door open, could lead to health code violations and expensive repercussions. But now real-time data analytics in context, with alerts sent to the right people, improves productivity and performance—helping the company manage assets proactively, reduce downtime, and realize more value from its assets.

Asset-heavy industries like cement, steel, mining, pulp and paper #manufacturing, and pharmaceuticals can all benefit from integrated live-streamed #data to prolong the health of machines. @SmartconnectIot via @insightdottech

A Future with Streaming Data for Asset Management

The basic thesis behind SmartConnect, making more relevant data analytics available in real time and sending text-based alerts when needed, is not confined to just a couple of use cases. Because time is of the essence, response and maintenance services can be redesigned for higher efficiency.

The sky’s the limit for implementations. Asset-heavy industries like cement, steel, mining, pulp and paper manufacturing, and pharmaceuticals can all benefit from integrated live-streamed data to prolong the health of machines.

Areas of operation may include filling lines, packaging goods, and assessing machine states and the possibility of failure.

Equally important, moving away from difficult-to-master programmable logic controllers to easier no-code software plus microprocessor-based gateways makes automation more widely accessible to manufacturers everywhere. “We’re bringing automation capabilities for the end user around the world,” Sesay says.

Equitable access to automation through no-code and lower-cost software will upend how and where the world manufactures its goods. And data-driven operations will mean the most efficient processes possible. With the democratization of data insights, companies big and small will no longer have to fly blind.

 

Edited by Georganne Benesch, Editorial Director for insight.tech.

All-in-One Medical AI PCs Meet Healthcare Computing Needs

In the healthcare industry, there are companies that build medical equipment and providers that use these machines and devices. While the business models of these two groups are completely different, they face the same challenges and opportunities. Both are eager to deploy the latest technologies but must contend with strict regulatory requirements and short product life cycles.

A traditional medical device is designed with data residing locally on the device. But with the increasing demand for interoperability within healthcare facilities, healthcare professionals can improve efficiency with AI-enabled medical PC solutions designed to withstand mission-critical environments and process patient data throughout the treatment process.

Managing all these constraints is a tall order, but today’s medical computers are up to the task. Hygienic, compact, and portable, medical-grade AI computers can be used by practitioners throughout hospitals and clinics. And high-performance processors enable near-real-time AI analytics, helping doctors and nurses make faster, better-informed diagnostic and treatment decisions.

#EdgeAI and #ComputerVision have become increasingly important to today’s imaging and patient monitoring machines, which can swiftly analyze #data and support physicians with diagnoses. @OnyxHealthcare1 via @insightdottech

Keeping Up with AI Innovations in IoT Medical Devices

Edge AI and computer vision have become increasingly important to today’s imaging and patient monitoring machines, which can swiftly analyze data and support physicians with diagnoses. But for medical device makers, incorporating these cutting-edge capabilities can be a struggle. Medical device development takes on average eight to 24 months to implement hardware and software design changes in accordance with regulations, and another two to three years to obtain certification via clinical trials.

“They don’t have the luxury to continuously upgrade to the latest technology,” says John Chuang, President of Onyx Healthcare, Inc., a Taiwan-based global producer of medical PCs and hospital IT solutions.

And once those finished medical devices are released, they need to stay in service for a long time. Hospitals have a complex mix of technology, and don’t usually upgrade their equipment for 10 years or more—an eternity in the fast-moving world of medical AI and computer vision development.

To keep machines as up-to-date as possible, Onyx collaborates with medical device companies, hospitals, and Intel, which supplies the processors for the all-in-one (AIO) medical computer the company produces for hospitals and clinics. Intel high-performance processing power is the key that enables software to run edge AI analytics.

Working closely with Intel, Onyx can provide a scalable custom design that allows medical device companies to incorporate the latest processors into its medical-grade computing technology. “By providing the latest technology to medical OEMs and ODMs, we help them keep a step ahead, so they don’t have to worry their technology is outdated by the time their devices are launched,” Chuang says.

Delivering Machine Information Where It’s Needed

In hospitals, medical devices are part of an elaborate symphony that requires precise timing and coordination. Doctors rely on information from many sources to diagnose and treat patients, including medical records and lab results, blood pressure and oxygen monitors, and images from X-ray, CT, and ultrasound scanners. But since these machines are made by different manufacturers and use different software protocols, they typically don’t connect with one another—or with hospital IT systems. As a result, doctors often must examine disjointed patient data.

A system like the Onyx AIO medical AI computer serves as a symphony conductor, integrating data from all sources—including patient records and off-site machines. It enables the transmission of high-resolution images and the performance of AI analytics, giving doctors a comprehensive, near-real-time view of a patient’s condition.

“The data transmitted is informative enough for physicians to make sound, timely treatment decisions. That’s especially crucial for patients in critical care, and in situations where the doctor needs to determine whether surgery is required,” Chuang says.

The ONYX AIO AI computer is also designed to meet hospitals’ rigorous sanitary requirements. For example, instead of using a fan for cooling, it uses an onboard heat sink, creating a closed system that won’t transfer germ-carrying air into hospital corridors or patient rooms. “We are able to use a fanless design because of the efficiency of low-wattage Intel processor technology,” Chuang says.

Medical IoT in Action: Mobile Nursing and Telehealth Solutions

Connecting patient information via medical computers can help hospitals and clinics achieve greater interoperability. That’s an important goal for the CAIH, a French government alliance formed to consolidate technology requirements across the country’s hospital networks. Onyx developed two solutions to help the organization achieve its objectives.

The first is mobile nursing stations—carts containing an AIO AI computer that nurses can bring on their rounds. The medical computer enables them to keep an eye on every patient under their care as they go from room to room. In addition to keeping nurses apprised of patients’ vital signs, the AIO helps monitor equipment, letting nurses know, for example, if an IV is running low on fluid.

AI monitoring helps short-staffed hospitals better attend to patients’ needs, Chuang says. It also helps them deal with the fast-growing use of telehealth. In a second solution it developed for the CAIH, Onyx enables AIO computers to connect doctors with patients, caregivers, and medical equipment at remote facilities—including skilled-nursing homes, where a physician may not be present.

Doctors can view patients from their own AIO computer and guide nurses in using medical instruments, such as portable ultrasound machines or scopes for examining the ear, nose, throat, or skin. Devices are equipped with high-definition cameras that relay medical-grade images to the doctor.

“With this information, physicians can do some diagnostics and quickly determine whether a patient needs to come to the hospital right away,” Chuang says. Otherwise, many would have no choice but to be transported there—often a challenge for those in a skilled-care institution.

Onyx AIO computers are also enabled for 5G communications, allowing remote facilities with a 5G network to relay alerts for patient vital data or slip-and-fall accidents directly to doctors or nurses, instead of waiting for the information to be processed in the cloud.

Building Future-Ready Technology

As AI capabilities expand, medical computers are assuming a greater role in patient care. But to stay useful, they must evolve along with the machines they connect with, Chuang says.

“Medical computers need to become more like medical devices themselves. We’re seeing greater demand for them to interface with specialized machines, and demand for data processing speed is also increasing. By building the latest Intel technology into our computers, we are able to satisfy those needs,” Chuang adds.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

AI and Computer Vision Boost Biomedical Research

Breakthroughs in biomedical research often come from understanding correlations and causality—the what and how of the body’s physiological processes.

Scientists make observations, such as a higher rate of cancer or a better response to vaccines, by correlating data sets. They then research the underlying reasons for these correlations. Systematically plodding through these cycles of biomedical research is tedious but rewarding work.

Now AI-driven computer vision applied to medical imaging accelerates the discovery of data correlations, surfacing problem points worth exploring much more quickly. In doing so, AI helps scientists zero in on problems faster and arrive at life-changing medical solutions sooner.

AI Models in Medical Imaging

One use case is how Mayo Clinic uses AI and machine learning to profoundly enhance the capabilities of ultrasound imaging. As a starting point, the medical institution uses the latest technologies, tools, and products from Intel and Dell—the Intel® Geti platform and the Intel® OpenVINO toolkit running on Dell edge systems—to find kidney stones from endoscopy videos of the organ, and to assess Point of Care (POC) Ultrasound images for cardiac function.

Mayo Clinic’s work in AI ultrasound imaging is a particularly useful case for the technology, says Alex Long, Global Head of Life Sciences Strategy at Dell Technologies, a solutions provider that offers an extensive product portfolio and comprehensive services. For too long, interpretation of ultrasounds has been subjective, prone to error, and dependent on specialized training.

Visual #AI models, trained on banks of #data, can help providers offer more personalized care at the bedside. Augmenting care with AI can find anomalies faster, more accurately, and with minimal training. @DellTech via @insightdottech

But visual AI models, trained on banks of data, can help providers offer more personalized care at the bedside. Augmenting care with AI can find anomalies faster, more accurately, and with minimal training. Modern approaches which leverage pre-trained models and active learning enable the rapid development and deployment of these models. “Our care providers understand the benefit of using AI to aid in patient care, but in cases like the POC ultrasound, there wasn’t a viable AI model available,” says Dr. David Holmes of Mayo Clinic. His team of engineers leveraged interactive AI modeling tools to rapidly develop an AI solution that assess the quality of the images at the bedside in order to ensure the best images are used in the patient care.

The use of AI in medical imaging is about more than its capacity as a diagnostic tool. “It’s about leveraging visual AI to interpret imaging data and to accurately augment the capabilities of the human,” Long says. Diagnosticians trained to sift through files to find problems—evaluating mammograms to find early signs of breast cancer is a good example—can also benefit from AI guiding them to more places to evaluate. The advantage of AI is that it finds patterns that the human eye, due to confirmation bias, might miss.

A variety of additional scenarios in biomedical research can benefit from AI, especially if they involve imaging data. “It turns out there’s a lot of other medical systems that are visual in nature,” Long says. And they could all benefit from using AI as a tool to augment human abilities.

Collaboration Propels Innovation

A partnership between Intel and Dell Technologies enables these AI-driven breakthroughs. “The definition of community is a group of people with like-minded aspirations who are trying to achieve a goal together,” Long says. “We’re seeing a healthcare life sciences community being born between Dell and Intel.”

Collaboration between the two companies has evolved organically over many years, and Dr. Holmes’ work is one example of how the two bring their strengths to the table. The companies’ healthcare solution teams and their technology and product platforms enable collaborations with leading biomedical researchers and providers.

“The depth of our portfolio, the depth of our partnership, and the expertise in IT and infrastructure required to deliver” are what Dell brings to the table, Long says. In addition, Dell keeps in mind that the healthcare industry places heavy emphasis on privacy and protection of sensitive patient health information. “It’s not just about technology adoption to mitigate costs,” Long says, “it’s about technology to advance the human initiative of improved health. We’re passionate about really advancing the care of human beings.”

The Future of AI in Healthcare

The Mayo Clinic use case offers a glimpse of what is possible with AI models in biomedical research. We are just beginning to explore ways that AI can find correlations on visual imaging data, directing humans to new avenues for further exploration.

Researchers almost always try to find correlative data to drive conclusions, and “if you want something to identify a correlation, there’s nothing better than AI,” Long says. “I’m very excited about AI’s potential in accelerating diagnostics, improving patient care, and rapidly getting to understand the next wave of heuristics and treatments.”

When it comes to the human body, there’s a lot left to discover. It’s an exciting time to work at the intersection of technology and medicine because the volume of discoveries that AI can facilitate is simply mind-boggling. AI can train its eyes on years of data. The results are likely to be nothing short of revolutionary.

 

Edited by Georganne Benesch, Editorial Director for insight.tech.

Machine Builders Gain an Edge with Next-Gen Products

Edge AI and computer vision technologies are finding new use cases in nearly every industry. In the factory, applications like automated optical inspection (AOI) and industrial robotics improve operational efficiency. In healthcare, these technologies augment medical imaging and diagnostics. And they enable smarter traffic management in our cities, and enhanced security in our offices and public spaces.

This adoption of AI in so many different sectors also changes the outlook of business leaders. AI is no longer seen as some promising technology on the far horizon. AI already is beginning to deliver positive outcomes for organizations of all types. “The implementation of AI in real-world scenarios is currently happening,” says Christine Liu, Product Manager at Advantech, an edge AI platform and AIoT solutions provider. “Decision-makers today view AI as a ‘must-have’ in order to remain competitive.”

It’s a time of great opportunity for AI solutions developers, but they face challenges that need to be overcome, such as selecting AI computing solutions, integrating software SDKs, and training AI models.

The good news is that embedded hardware partnerships enable powerful, development-ready AI computing with products like Advantech’s GPU Card EAI-3101, designed with the Intel® Arc A380 GPU. GPUs, originally built for visual and image processing, are currently among the primary accelerators used to boost AI computing power.

The Latest Embedded GPU Card Supports Multiple AI Use Cases

Advantech’s new lineup of edge processing hardware is a case in point. The company will show these products and more at embedded world 2024 in Nuremberg, Germany.

The EAI series product line offers comprehensive AI acceleration and graphics solutions, including several PCIe and MXM GPU cards with Intel Arc Graphics. With the coming launch of the Intel Arc A380E, Advantech offers the EAI-3101, a new embedded PCIe GPU card powered by the Intel Arc A380E with five-year longevity. Featuring 128 Intel Xe matrix AI engines, this GPU card delivers 5.018 TFLOPS of AI computing power. With optimized thermal solutions and automatic smart fans, these GPU cards can support different use cases, such as gaming, medical analysis, and more. The designs are proven to outperform the competition in AI inference capability and graphics computing.

The diversity of choices means that OEMs, ODMs, and machine builders are more likely to find a computing platform to suit their needs, regardless of intended use case. Machine builders for the industrial sector, for example, would most likely select one of the commonly used PCIe configuration cards—while the smaller form factor and shock and vibration resistance of the MXM card might appeal to manufacturers of medical devices.

“With Intel® Dynamic Power Share, Arc GPUs and Intel CPUs can automatically and dynamically (re)distribute power between processing engines to boost performance depending on the use case—providing stable, high-performance computing for all kinds of edge workloads,” says Liu. “And the Intel® OpenVINO toolkit helps us accelerate AI inference times, reduce the AI model footprint, and optimize hardware usage.”

Advantech’s #development partnership with @Intel enables the company to bring the latest Intel products to market faster since it has early access to Intel’s latest-generation #processors. @Advantech_USA via @insightdottech

Advantech’s development partnership with Intel enables the company to bring the latest Intel products to market faster since it has early access to Intel’s latest-generation processors. This benefits Advantech customers even when they already have existing solutions in full deployment. For example, ISSD Electronics, a maker of intelligent traffic management solutions, deployed a smart traffic management system in Turkey and recently upgraded the solution to incorporate Advantech’s EAI-3100 series. As a result, the company has already improved its system’s accuracy, reduced AI inferencing latency, and cut construction costs by 33%, says Liu.

Advantech is also announcing new models in its AIR series of edge AI inferencing appliances:

  • AIR-150: compact, fanless edge AI inferencing system based on 13th Gen Intel® Core processors
  • AIR-310: edge AI box with MXM-GPU card supported by 14th Gen Intel® Core processors
  • AIR-510: edge AI workstation based on 14th Gen Intel® Core processors with RTX 6000 Ada

These edge AI platforms, built on the latest Intel platforms, fit many different scenarios. Businesses might opt for the relatively lightweight AIR-150 for their offices. For factory AMR automation management, the AIR-310 provides the industrial protocols and scalable GPU computing power needed. And for a computer vision-assisted medical imaging solution, which would likely have heavier graphical computing requirements, the more robust AIR-510 is the right fit.

Leveling the Playing Field for AI Application Development

Alongside its hardware products, Advantech offers a cross-platform edge AI software development kit (SDK). The SDK provides benchmarking tools to evaluate an AI application’s hardware requirements early in the solution development process. This helps developers select the best hardware for their solution—and prevents them from overspending on excessive computing power. In addition, the SDK enables real-time monitoring and over-the-air (OTA) AI model updates post-deployment.

As part of the SDK, OpenVINO provides model optimization and hardware acceleration benefits. The open-source inferencing toolkit also helps AI developers simplify their model deployments and software development workflows by supporting multiple AI model frameworks, including PyTorch, TensorFlow, and PaddlePaddle.
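For example, here is a hedged sketch of that conversion workflow in recent OpenVINO releases, with a stand-in torchvision model where a machine builder would substitute its own network and target device.

```python
# Convert a PyTorch model to OpenVINO and compile it for an Intel GPU.
# The torchvision ResNet is only a stand-in for a builder's own model.
import torch
import torchvision
import openvino as ov

torch_model = torchvision.models.resnet50(weights="IMAGENET1K_V2").eval()
ov_model = ov.convert_model(torch_model, example_input=torch.rand(1, 3, 224, 224))
compiled = ov.Core().compile_model(ov_model, device_name="GPU")   # e.g. an Arc GPU
print(compiled.input(0).shape)
```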

The availability of open-source toolkits and SDKs, coupled with a mature edge AI product ecosystem, will help more machine builders, OEMs, and ODMs to compete more effectively with a stable, development-ready AI computing environment. They help shorten the overall solution development time and allow designers to get innovative products to market faster.

Advantech also offers its Edge AI SDK, an AI toolkit that provides a friendly environment from evaluation and SDK adoption through deployment on all the EAI and AIR series products mentioned above.

In the coming years, then, expect to see a far more level playing field for AI application development—what some have called “the democratization of AI.”

In Liu’s view, this is the correct path forward for our increasingly AI-enabled world. “The power of AI shouldn’t be limited to just a few companies. Resources such as our edge computing platforms, our SDK, and OpenVINO are there to be leveraged by everyone,” she says. “AI will be everywhere in the future—which is why we need these open and powerful solutions.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

AI-Powered Manufacturing: Creating a Data-Driven Factory

Imagine being able to predict machine failures, detect defects, prevent costly downtime, and ensure worker safety in real time. That’s exactly what AI-powered manufacturing aims to do.

It’s no longer just about efficiency. AI revolutionizes the factory floor, boosting product quality, reducing waste, and personalizing training for higher productivity. Is your factory ready to take on these changes?

Join us as we explore the opportunities and challenges of embracing and integrating AI in manufacturing. We address concerns, share success stories, and equip you with the knowledge to build a smarter, safer factory.

Listen Here

[Podcast Player]

Apple Podcasts      Spotify      Google Podcasts      Amazon Music

Our Guests: AllGoVision and Eigen Innovations

Our guests this episode are Aji Anirudhan, Chief Sales & Marketing Officer at AllGoVision, an AI video analytics company; and Jonathan Weiss, Chief Revenue Officer at Eigen Innovations, an industrial machine vision provider.

Prior to AllGoVision, Aji was the Vice President of Sales and Marketing at AllGo Embedded Systems and Manager of Sales and Business Development for eInfochips. At AllGoVision, he focuses on the product strategy and growth of the company.

Before joining Eigen Innovations, Jonathan served as the Global GTM Leader in Industrial Manufacturing for AWS Strategic Industries and was the Vice President of Emerging Technologies at Software AG. As CRO for Eigen, Jon oversees revenue generation activities and drives machine vision software and engineering sales.

Podcast Topics

Aji and Jon answer our questions about:

  • (3:26) Industry 4.0 challenges and pressures
  • (6:19) Safety risks for factory workers
  • (10:29) Creating a data-driven factory
  • (15:48) Ongoing factory floor transformations
  • (18:18) Data-driven factory strategies
  • (20:26) Industrial AI video analytic use cases
  • (25:31) Industrial machine vision examples
  • (30:07) Manufacturing opportunities up ahead

Related Content

To learn more about AI-powered manufacturing, read Machine Vision Solutions: Detect and Prevent Defects. For the latest innovations from AllGoVision and Eigen Innovations, follow them on Twitter/X at @AllGoVision and @EigenInnovation, and on LinkedIn at AllGoVision and Eigen Innovations Inc.

Transcript

Christina Cardoza: Hello and welcome to the IoT Chat, where we discuss the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re talking about AI and manufacturing—everything from defect detection and prevention to worker safety. We’re going to look at how AI continues to transform the factory floor.

And joining us today we have two expert guests from Eigen Innovations and AllGoVision Technologies. So, before we get started, let’s get to know our guests. Jon, I’ll start with you from Eigen. Please tell us more about yourself and what you do there.

Jon Weiss: Yeah, wonderful. Thanks, Christina. Great to be here, and thanks for having me. My name’s Jon Weiss. I’m based here in Greenville, South Carolina, and I work for a company called Eigen Innovations. I’m the Chief Revenue Officer, so essentially responsible for really everything that’s customer facing in front of the house here.

Eigen has a simple mission. It’s quite complex sometimes, but it’s really simple and straightforward. We want to help manufacturers all over the world not just detect defects but prevent defects, to ensure that they make the highest standard quality parts every single time they make them.

Christina Cardoza: Great. Great to have you here. And Aji from AllGoVision, please tell us more about yourself and the company.

Aji Anirudhan: Thank you, Christina, for inviting us for this podcast. So, my name is Aji Anirudhan. I have a global responsibility with the company AllGoVision. I head the sales and marketing for the team. I’m also part of the management looking at the product strategy and growth of the company. And to just give background about the company: AllGoVision, we have been in business for 15 years, always focused on developing video analytics–based software products. We have been doing this, and implementation has been done across different segments.

The segments range right from smart cities to critical infrastructure to retail to airports. And this implementation has been in different markets worldwide. And with respect to the implementation, right now our focus is moving away from—it’s not moving away, but evolving from giving a security or people-based solution to a more safety-focused solution, which is in the manufacturing sector.

That’s one focus area—that’s where we are building solutions and trying to enhance our solution support, especially workplace safety and productivity, and how we can enhance this for different customers worldwide in manufacturing, warehousing, or other industry segments.

Christina Cardoza: Great, yeah. I’m looking forward to hearing more about that, and especially how companies can leverage some of their existing solutions, like you said, like security solutions that they have and really transform it to provide even deeper insights and more benefits to the company.

So, but before we get there, Jon, I wanted to start off with something you said in your introduction. You guys are not only helping with detection but prevention of some of these quality issues. And so I’m wondering, obviously this is a driving factor of Industry 4.0. Manufacturers are really under pressure to transform and evolve and take care, take advantage, of some of these intelligent capabilities out there.

So I’m wondering, from your perspective, what challenges have you seen manufacturers have to deal with because of Industry 4.0, and how do things like machine vision and AI, like you mentioned, help address some of those challenges?

Jon Weiss: Yeah, absolutely. So it’s important to understand one of the key challenges in what we do here at Eigen—and really in the industry as a whole, if you think about traditional vision systems, and by the way that’s all we do. We do machine vision for quality inspection; that’s what we do. And we’re hyper-focused in manufacturing, by the way, that’s not just a vertical of ours; that’s all we do is industrial-manufacturing and process-manufacturing quality inspection.

Now historically, traditional vision systems really lend themselves to detect problems within the production footprint, right? So, if you’re making a product, traditional vision systems will tell you if the product is good or bad, generally speaking. You may wonder, well then how on earth are you able to help people prevent defects, right? Not just tell them that they have produced a defect.

And that’s where our software gets pretty unique in the sense that we don’t just leverage vision systems and cameras and different types of sensors, but we also interface directly with process data—so, historians, OPC UA servers, even direct connections to PLCs at the control-network level, meaning we don’t only show people what has been produced, we give them insights into the process variables and metrics that actually went into making the part. So we go a step further from just being able to say, “Hey, you have a surface defect.” Or, “You have some kind of visible or nonvisible defect.” But we’ll also show people what went wrong in the process, what kind of variation occurred that resulted in the defect.

Now, to answer your question, how does AI and vision systems play a role? Well, naturally everything we do is vision-system based; but a lot of what we do is also AI and ML based. You see this a lot in our thermal applications. For example, how we help metal and plastics companies inspect welds for both metals and plastic processes to determine with very high confidence whether or not they have a good or bad weld. We use AI and ML for a lot of that type of capability here.

Christina Cardoza: Great. And that’s obviously a big competitive advantage, that quality assurance aspect of it. And it’s great to see that these technologies like machine vision can be used to not only take the burden off of workers but to help them pinpoint some of the problems and really improve their operations. But I’m sure it also creates a lot of pressure for the manufacturers, for the people on the factory floor—making sure everything is as perfect as can be, and all of the processes and operations are as efficient as possible.

So, Aji, can you talk a little bit about how some of these challenges to quality assurance or to improve the factory floor, how some of that puts pressure on workers. And what are the risks that you see involved with regards to workers in Industry 4.0, and how can the video analytics that you were talking about address some of those?

Aji Anirudhan: So, Industry 4.0: the primary thing, as you say, is how do you enhance the automation within this industry? How do you bring in more machines? How do you actually make the shop floor more effective? But as we all know, the people are not going away from the industry and the factory, which basically means that there is going to be more interaction between people and machines within one.

To just give you a context, the UN has some data which says that, worldwide, companies spend $2,680 billion as an annual cost of workplace injuries and damages. Which basically means that this cost is going to be a key concern for every manufacturer. So traditionally what they have done is they have looked at different scenarios where there were accidents, different hazard situations which have come, and come out with policies to make sure that doesn’t happen, and to investigate whenever it happens to make sure that the policies are updated.

What we are saying is that that’s not enough to actually bring this cost down. There could be different, other reasons why these accidents are happening. So you have to have a mechanism to actually make sure that you do a real-time detection or real-time monitoring and compliance of these policies to make sure that the accident never happens. That means if an employee who is on a shop floor is supposed to wear a hard hat and if he is not wearing a hard hat, even though the accident doesn’t happen, we’ll make sure that that is identified and reported back so that the frontline managers make sure that this is being taken care of immediately so that a potential accident can be avoided.

So what we are trying to look at is any event with—any scenarios where a workplace, a worker, is following the policies, or looking at a scenario which is otherwise not anticipated which can create a potential accident. We continuously monitor and alert and give data for the managers, the EHS safety inspectors to make sure that they’re addressing this in real time, updating the policies, training the people so that we—so the solution is not just replacing the existing models or existing policies, but enhancing the policies to give them a real-time insight so that they can generate enhanced safety policies.

So that—two ways to help this in factory scenarios: it can enhance the policies to reduce accidents; it can also make sure that the compliance—which needs to be met by the safety inspectors—becomes much easier for them. And, again, the bottom line: reduced accidents means reduced insurance costs; that adds to the top line/bottom line for the companies. That’s what we are trying to actually bring. And it is based on different AI- and ML-based algorithms, as well as some of the applications which we see very specific to each industry.

Christina Cardoza: Yeah, absolutely. And, to your point, it’s very important that the people that are still on the manufacturing floor, they are following these safety procedures, and are making sure that everything is running smoothly on the floor so that it continues to run smoothly. And, to your point, this idea of Industry 4.0 that’s involving more machines, more autonomous robots being integrated in the floor—so you really need to make sure that this is all working cohesively together and pinpoint any issues that you may find.

You both mentioned data: data is going to be really important in, end-to-end, adding these advanced capabilities, making sure that they are running smoothly, operating smoothly, they’re picking up the quality assurance aspects, they’re not missing anything—from things on the product line to people on the factory floor.

And so, Jon, I’m wondering, looking at that data aspect and creating this data-driven factory, how can manufacturers begin to set this up—just looking at some of the value that it creates having this data-driven factory from end to end?

Jon Weiss: Yeah, it’s a good question. Before I answer that, I’m just going to piggyback on one thing that Aji said, because it is really important where he started his explanation by saying that the people—people aren’t going away, right? So we need technology to keep people in this world safe, right? It’s: keep people safe and ensure that they can do their job effectively while protecting themselves.

So it’s interesting, because in our world, although we stay hyperfocused on what’s being made—looking at the quality of the product or the part that’s being made—there’s also the same idea that people aren’t going away, right? And I think that is a common misconception in a lot of—especially these days—in a lot of artificial intelligence–type discussions where that’s what makes up most of the headlines: AI is going to replace you; it’s going to take your job away, and all this kind of stuff.

And I think it’s important to talk about that for a second, because what we see in the world of quality is actually the exact opposite of that. We’ve had some amazing discussions with our customers in various types of factories, really all over the country and even the world. And what we find in the quality realm is by bringing vision systems and software tools to the hands of these folks in factories by enabling them to inspect parts faster—and oftentimes at second and sub-second intervals, whereas it used to take minutes or sometimes even longer than that per part—now they’re able to produce more, which means they’re actually able to hire more people to produce more parts in a given shift.

And so it’s been really interesting to see that paradigm where I think there’s a lot of FUD and fear, if you will, around replacing people with this. But actually we see the opposite in our world where it’s actually empowering manufacturers to hire more people to produce more. So just a really interesting point on that that I wanted to mention.

That said, now I’ll answer the question around the data, the significance of it in an organization, how to get started; I think that was the question. And when you think about Industry 4.0, holistically there’s a lot that goes into that. What Aji and myself do, we’re kind of small pieces of a much larger puzzle; but there is one common thread in that puzzle, and it’s really data, right? It’s all powered and connected by data. That’s how we drive actionable insights or automation, right? That’s how we’re able to do a lot of what Industry 4.0 promises, and the way organizations typically are successful in large-scale digital transformations in Industry 4.0 is by really creating a single source of truth for all production data.

So, many years ago we called this “data lakes,” “data warehouses,” things like that. Then it kind of turned into style architectures. And these days now it’s really the—what’s been coined as the unified namespace, right? UNS is what a lot of people talk about now. But, simply put, it’s a single place to put everything—from quality data, process data, safety data, field-services-type data, customer data, warranty information—like all of this kind of stuff. You put that all into a single place, and then you start to create bi-directional connections with various enterprise-grade applications so that ERP knows what quality is looking at and vice versa, right?

This is how you get into automated replenishment of consumables and inventory management, material flow, all this kind of stuff. I know it’s a lot and I’m going fast, but that’s such a loaded question. Oh my goodness—we could spend a whole hour talking just about that, but hopefully that makes sense. It really all starts with a single source of truth, and then having the right strategy and architecture to then implement various types of software into that single source of truth for the entire industrial enterprise. Hopefully that makes sense.

Christina Cardoza: Absolutely. And we probably could have had our entire conversation just be around the data-driven culture of manufacturers, and I agree with what you said earlier about people are often afraid to implement AI; they’re afraid it’s going to take their jobs and how it’s going to be implemented. I would argue that in the industry sometimes there’s a lack of job skills available.

So AI really helps replace some of these mundane tasks that we don’t really have enough people or the labor there to do. And then, for the people that we do still need for the manufacturing floor, it’s a little bit of a better job-life experience. You’re able to focus on higher-priority, important tasks rather than these mundane issues, and it’s a little less error prone, so hopefully less stressful on the worker for having to find these incidents.

But going back to the data-driven idea of the factory—obviously we’ve been talking about Industry 4.0 for a couple of years now. Everybody knows the benefits of it, everybody wants all the data to gain that value and make those better-informed decisions. But do you feel like manufacturers are there today? Are they prepared to really take on this idea of the data-driven factory, Jon? Or is there still some education or learning that they need to do, still some more transformations that need to happen on the manufacturing floor?

Jon Weiss: Yeah. Well, you know, quite frankly, I don’t think anybody’s an expert in all facets of Industry 4.0, whether it’s a manufacturer or a vendor, because it’s such a vast topic. I do think you have experts for certain portions of it. But it really is a really wide topic.

Now, I’ll say manufacturers as a whole I think are on board generally speaking with the need to digitize, the need to automate. I think there’s no doubt about that. I do think there’s still a lot of education that has to take place on the right way to strategically go about large-scale initiatives—where to start; how to ensure its effectiveness, program effectiveness and success; and then how to scale that out beyond factories.

That’s still a problem for even some of the most mature industrial organizations on the planet. How do you get one—in my world it’s a vision system, right? So in my world it’s trying, it’s helping industrials overcome the challenges of camera systems being siloed and not communicating with other enterprise systems and not being able to scale those AI models across lines, factories, or even just across machines. That’s where traditional camera systems fail. And we’ve cracked that nut. So it’s an exciting time to be a part of that journey, that’s for sure.

Christina Cardoza: And, like you said, no one is an expert in this space, and there’s a lot of different pieces going on. We have AllGoVision from an AI video-analytics perspective, Eigen from the machine-vision defect detection and prevention perspective. Obviously insight.tech and the IoT Chat as a whole—we are sponsored by Intel.

But I think to do some of this, to really create that data-driven factory and to make some of these capabilities happen, it really is important to have partners like Intel to help drive these forward. I can imagine with the AI-driven video analytics—that’s collecting a lot of the data that manufacturers do need. And I can see partners like Intel being able to work with partners like Eigen and AllGoVision to make some of that data—get that data fast, make it possible, make it valuable that people can actually read through it and find what’s important.

So, Aji, I’m curious from your perspective, what’s the importance of partnerships like Intel, and how is that helping bring some advantages of video analytics to manufacturing?

Aji Anirudhan: Well, definitely. I’m saying we, as we said, the company has been there for 15 years now; we’ve been offering a video-analytics solution. Intel has been one of our first partners to actually engage with, to actually run our algorithms. And we have—from there we have grown over a period of time. We were one of the first partners or first video-analytics vendors to actually embrace their OpenVINO architecture, because when we moved our algorithms to a deep learning–based model, this was very easy for us to actually port it to different platforms, which Intel was prime.

And over a period we have been using Intel processors right from the early versions, right now to Gen4 and Gen5. And what we’ve seen is a significant performance improvement for us. I mean, the number of cores which we require on Gen4 and Gen5 is much, much optimized, lower than what we had used before. That is very advantageous. What Intel is doing in terms of making platforms available and suitable for running DL-based models is very good for us. It’s very important because we do different use cases simultaneously, which means we can’t have a lot of servers; we want to optimize on the cost per channel. So that way Intel is a good partner for us.

And now some of the new enhancements they’re doing, especially for running deep learning algorithms, like their integrated GPUs or the new Arc GPUs which are coming in—we are excited to actually see how we can optimize between the processor and the GPU to actually make it more effective to run our algorithm. So, yes, Intel is a key partner with respect to our strategy currently and going forward, and we are very happy to actually engage with them in terms of different customers or different products and different use cases.

Christina Cardoza: Yeah, and talking about those different use cases, that you’re going to need a lot of those servers or the power from partners like Intel behind to make those happen. Can you talk about some of the advantages of AI-driven video analytics and manufacturing in addition to worker safety? What are those use cases that video analytics can bring? And if you have any customer examples or use cases that you can provide, that would be great also.

Aji Anirudhan: So I’m saying, as I’ve said, we have been engaged with different manufacturing setups even from our early stages, in which we started looking at worker safety. So there were different use cases—like definitely the security and prevention of other safety requirements within the plant, right? From things like access to the plant, restricted access to the plant, making sure that only the right people are coming in, and making sure that the location where the manufacturing happens is clear of obstacles.

There’s a lot of use cases with respect to operational-security things, which was always a use case. Then we looked at, when we worked on things like what we call inventory management, basically saying that once a production happens and then it goes back to inventory, how do we actually track the inventory with respect to vehicles, with respect to size of the loading. Those are things which are beyond worker safety that we have been looking at.

And that is linked to the supply chain as well. Then more to with—so one of the new use cases which is coming in is how do we actually manage predictive maintenance of machines? I think this is an area which we are working on now, this use case, which is coming from a customer. See, for example there was—I think this is a utility company, very interesting use case, where they wanted to use our algorithms to monitor their big machines through thermal cameras to actually make sure that the temperature profile of those machines didn’t change over a period of time. If it changes, it means something has to go for predictive maintenance. So this is another area where we see a lot of applications coming in.

And worker safety definitely is evolving, because what we see in worker-safety requirements for one specific customer—electric, oil, and gas is different from what we see in a pharmaceutical company—so the use cases, the equipment which they use, the protective gear they need to actually deploy, and the plant-safety requirements which they have. For example, we were working with a company in India where they have this hot metal which is part of their production line, and there are instances where it gets spilled. It’s hugely, heavily hazardous, both from a plant-safety as well as a people-safety point of view. They just want to make sure that this is continuously monitored and immediately reported if there is anything. It’s a huge cost, and it is a production loss if it happens. That’s one thing.

And then we work with multiple oil and gas companies where a couple of the requirements include making sure there’s an early detection of fire or smoke within the plant. So we have a fire-and-smoke solution which we are continuously enhancing to make sure that we do that. They also want to look at the color of the flame which comes out while it is burning something, to make sure that that color indicates what chemical is burning.

So these are—some of them are experimental, some of them, they were the standard thing which we can do. So, use case-wise, a combination of different behavioral patterns of people, to interaction between people and machines or people and vehicles going within the industrial-manufacturing segment are bringing in new use cases for us. So this one—some of them we have implemented, some of them we are working on a consulting model with our customers to make sure that we bring in new algorithms, or enhance our algorithm and train them to actually address their use cases.

Christina Cardoza: And I’m just curious—because I know we talked in the beginning about looking at some of these solutions or these infrastructure, these cameras that manufacturers already have, like beyond the security systems—so is it the—to get these use cases, are a lot of manufacturers leveraging the video cameras that they already have existing on the manufacturing floor and then adding additional advanced capabilities to it?

Aji Anirudhan: Yes, yes. I’m thinking most of the factories are now covered with cameras, CCTV cameras, for their compliance and other requirements. We are going to ride on top of that, because our requirements easily match with the input/output coming from these cameras, and then we look at the positioning of the camera, and then maybe very specific use cases require a different camera with respect to maybe a thermal camera there, maybe the position of the camera or the lighting conditions.

So those are things which are enhanced. But 80% of the time we can reuse existing infrastructure and ride on top of the video feed which is coming, and then do these use cases with respect to safety, security, or other people-based behavioral algorithms.

Christina Cardoza: That’s great to hear, because it sort of lowers the barrier of entry for some of these manufacturers to start adding and taking advantage of the intelligent capabilities and really building towards Industry 4.0.

I’m curious, Jon, from a machine-vision perspective, how can manufacturers start applying machine vision into their operations to improve their factory? And if you have any customer examples also or use cases of how Eigen is really coming in and transforming the manufacturing world.

Jon Weiss: Yeah. Well, holy cow, we have tons of use cases and success stories similar to Aji. We’ve been around, we’re a little bit younger—so I think 15 years for you folks. We’ve been around 14 years, so we’re the maybe—

Aji Anirudhan: I’m the big brother here.

Jon Weiss: The big brother, that’s right. But, yeah, we’ve got all kinds of success stories in the verticals that we focus in. Like I mentioned, all we do is manufacturing, but we do focus on a few different verticals. So, automotive makes up probably about 70% of our business, and then both with OEMs and tier one, tier two, and even end tier suppliers throughout the value chain. We also do a good bit in paper and packaging, as well as what we call industrials: so, metals, aluminum, steel. I’ll give you some success stories or use cases there that we’ve put into production environments.

But to answer the first part of the question: how do you get started? Well, it all starts by really—just like any other Industry 4.0 project, or really any project in general—you have to define the problem statement, right? And understand what is it that you’re trying to solve. I always recommend against adopting technology just for the sake of adopting technology. That’s how you get stuck in POC, or pilot purgatory as people call it, where you just—you end up with a science project, and it doesn’t scale out to production and it’s a waste of time for everybody involved.

So, start with a clear understanding of the business problem. What is your highest value defect that occurs the most frequently that you would like to mitigate? Maybe you start there, but it all starts by understanding what is it that you’re trying to see in your process that is giving you problems today.

In the world of welding it’s oftentimes something that the human eye can’t see. That’s why vision systems become so important. You need infrared cameras in complex assembly processes, for example. It becomes multiple perspectives that are important, because a human eye cannot easily see all around the entire geometry of a part to understand if there’s a defect somewhere, or it makes it incredibly challenging to find it. Same with very, very low-tolerance, sub-millimeter-tolerance-type geometry verifications for parts. There are things that are quite difficult for the human eye to see.

And so I always recommend starting with something like that—finding a use case that’s going to bring you the most value, and then kind of working backwards from there. Once you do that, then it’s all about selecting technology, right? So I always encourage people to find technology that’s going to be adaptable and scalable, because if all goes well it’s probably not going to be the only vision system you deploy within the footprint of your plant.

So it’s really important you select technology that’s going to be adaptable, lets you use different types of sensors. You want to avoid, typically, something that’s going to require a whole new vision-system purchase for a different type of inspection. Meaning, if today you want to do a thermal inspection, tomorrow you want to do, I don’t know, an optical- or a laser- or a line-scan-type inspection—you don’t want to be in a situation where you have to buy a whole new system again, right? That becomes very expensive, both from OpEx and CapEx perspectives. So I think if you follow that recipe, find something that’s adaptable, agile, flexible, and work backwards from a defined problem statement, I think you’ll be set up for success.

Christina Cardoza: I love what you said: it’s not just adopting the technology just to adopt the technology; you really should be adopting technology to solve a problem. And so it’s great to see partners like AllGoVision and Eigen—you guys are developing these systems not to just develop these systems, but because you see a trend, you see a problem in the industry and you want to fix it. And it’s great to see that these technologies that you guys are creating and deploying, they are, like you said, adaptable, interoperable, so that manufacturers can be confident that, going with an AllGoVision or an Eigen Innovations, they’re really future-proofing their investments, and they’re going to be able to continue to evolve as this space evolves.

And, with that said, I want to put on some forward-thinking hats a little bit. Obviously we’ve been talking around Industry 4.0—I think a lot of people in the industry are already looking towards Industry 5.0. We’re not there yet, but we’ll probably be there before we know it. So as this AI space continues to evolve, what opportunities do you think are still to come or that we can look forward to? So, Jon, I’ll start with you on that one.

Jon Weiss: Yeah, I can’t help but laugh, because the buzzwords in this industry are just absurd. So I think we should probably figure out Industry 4.0 before we start focusing on 5.0. That’s just me personally though. I think a lot of manufacturers are still really just at the beginning of their journey, or maybe some are closer to the middle stage. But, yeah, I think there’s still a lot of work to do before we get to Industry 5.0, personally.

I’ll say that, forward looking, I think what’s going to happen is technology is just going to become even more powerful and the ways that we use it are going to become more versatile, right? There’s going to be a variety of things we can do. From my perspective, I see the democratization of a lot of these complex tools gaining traction. And so that’s one thing we do at Eigen. We build our software really from the ground up, with the intent of letting anybody within the production footprint, with any experience level, be able to build a vision system.

That’s really important to us, and it’s really important to our customers, and giving folks who may not be data scientists, may not be engineers, the ability to build out a product that’s going to tell them, or build out a dashboard or a closed-loop automation system that’s going to actually do something in real time to prevent bad product from leaving the line. That’s incredibly powerful. So I can only see that getting more and more powerful as time evolves.

Now, I would be remiss if I didn’t answer part of the question that I didn’t answer before, which was use cases. I got too excited telling you about all the other stuff, I forgot to tell you some use cases. So what I’ll do is I’ll tie this to this answer as well, if that’s okay.

So when you think about what the future holds, and I’ll phrase it like this: today we do really a variety of types of inspections, right? Just some examples: we do everything from inspecting at very high speeds, inspecting specialty paper and specialty coatings on paper to ensure that there’s no buildup on equipment. And in this one example in particular, there’s this specialty piece of machinery that basically is grading the paper as it goes under it. You only have eight seconds to catch a buildup of about two and a half, three millimeters or so. If you don’t catch it in eight seconds it does about $150,000 worth of damage, okay?

And that can happen many, many times throughout the course of a year. It can even happen multiple times throughout the course of a shift, if you don’t catch it fast enough. And so when I think about what the future holds, we’re able to do that today: we have eight seconds to actually detect it and automate an action on the line to prevent the equipment failure. We do that in about one second, but it’s really exciting to think about when we do that in two-thirds of a second, half a second in the future—like the speed at which this stuff starts to execute, that’s exciting to me.

The other thing that’s exciting to me, when I think about the future of some of the sensor technologies, we also have use cases where we inspect fairly large surfaces. So, think about three-meter-wide surfaces that are getting welded, like big metal grates for example. And we’re inspecting every single cross section in real time as it’s welded. We use multiple cameras to do that, and then we stitch those images together, standardize them, and assess based on what we see.

And so it’s interesting to me to think, you know, could we in the future with more, let’s say powerful technology, could we inspect the whole side of a cargo ship fast enough? During some kind of fabrication or welding exercise or painting exercise—something like that. So, thinking about like really large-scale assets, that’s kind of intriguing to me.

Christina Cardoza: Yeah, I love those use cases, because when you look at it from that perspective it really paints the picture of how valuable machine vision or AI can be in this space. You know, how much can go wrong and just simply adding working with partners like Eigen, adding these intelligent capabilities, it can really save you a world of hurt and pain and—

Jon Weiss: And it’s not just a nice-to-have, you know, just to tie this all back to the human element before I kick it back to Aji, just because we started on the human element. And so to bring this all back, one thing that’s really interesting to understand in this world of quality, from my experience what I’ve heard from a lot of my customers is actually they have the highest turnover in their plants, some of the highest turnover is within the visual-inspection roles. In some instances it’s very monotonous; it’s very—it could be an uncomfortable job if you’re standing on your feet for a 12-hour shift and you’re staring at parts going past you and you have your head on a swivel for 12 hours straight. And so as it turns out it’s very difficult to actually retain people in those roles.

And so this becomes almost a vitamin versus a painkiller sort of need, right? It’s no longer a vitamin for these businesses; it’s becoming a painkiller, meaning we’re helping alleviate an organizational pain point that otherwise exists. So, interesting stuff.

Christina Cardoza: Absolutely. And I totally agree with what you said earlier. We love our buzzwords, but I think that’s why it’s so important to have conversations like the one we’re having now, so we can really see where the industry’s at and how we can make strides to move forward and what is available.

Unfortunately, we are running a little bit out of time. Before we go, Aji, I just want to throw it back to you one last time, if there’s anything you want to add, any additional opportunities as AI continues to evolve that you want to talk about that’s still to come.

Aji Anirudhan: Yeah, I agree with what Jon said, that we—I think these technologies we’re talking about, especially worker safety, we are kind of enhancing the existing model, the workplace and environment. It is either establishing that a worker is not colliding with another vehicle—all that. But the thing we are seeing is these technologies for each vertical or each manufacturing segment, it’s a little customized. And because there is a huge scene where you have machines, you have people, the people doing different things which are going to be there. So we have to detect this; we want to make sure the right decision is made, and we report back in real time.

So definitely it takes time for us to actually implement all the use cases for each vertical. But what is interesting—which is happening in the, what we call the AI world—is all the generative AI. And we are also looking at things, how we can utilize some of those technologies to actually address these use cases. So rather than going to 5.0, we ourselves are defining new use cases and utilizing the enhancement with the AI world that is happening.

Like what we talk about: large vision models, which basically look at explaining complex vision or complex scenarios and help us—see, I give an example. They say that when, if there is an environment where vehicles are moving and a person is not allowed to move, that’s not for a pedestrian, that’s for vehicle movement. But if the pedestrian—we were talking to a customer who says, “Yes, the worker can move through that same path if he’s carrying a trolley.” But how do you define if the person is with a trolley or without a trolley?

So we are looking at new enhancements in technology like the LVMs we talked about, which we will implement and bring out new use cases there. So that way the technology which is happening, the generative AI we’re talking about, is going to help us address these use cases in the factory in a much better way in the coming years. But to actually get to what the mark, the 4.0, requires, we still have a lot of things to catch up on. We still have to look at each vertical and see things like behavioral, things like people-based activities to be mapped and trained so that we can give them a 90%, 95% accuracy when we are detecting the activities of people within this location in real time.

So we are excited about technology, we are excited about implementation which is going on. So we look forward to much bigger business with various customers worldwide.

Christina Cardoza: Absolutely. And I look forward to seeing where else this space is going to go. Sounds like there’s still more to come, but there’s still a lot that we can improve and be doing today. So I invite all of our listeners to check out Eigen Innovations and AllGoVision websites to see how you can partner with them and they can help really transform your operations.

In addition, at insight.tech we’ve done a number of articles on the two partners here today. So if you’d like to learn a little bit more about their various different use cases, they’re available on the website. But just want to thank you both again for the insightful conversation, and thanks to our listeners for joining this episode again today. Until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

MWC 2024: Private 5G Networks Take Center Stage

The transformative power of 5G was evident at this year’s Mobile World Congress (MWC 2024), held in Barcelona, February 26-29. MWC is the leading event for the connectivity ecosystem, with themes spanning across 5G and beyond, connecting everything, humanizing artificial intelligence, and manufacturing digital transformation.

Intel and a number of its ecosystem partners at the event showcased how 5G provides new opportunities for innovation and intelligence at the network edge.

For example, to help businesses build on their advanced network and edge AI analytic use cases, Intel announced its new edge-native software platform at MWC 2024. Intel’s Edge Platform, a modular, open software platform, enables enterprises to build, provision, deploy, and manage AI applications at scale. It features edge-native infrastructure management, edge-optimized AI inference, and simplified solution management.

Private 5G Network Edge Advantage

Global IT services company Wipro showcased the power of Intel’s Edge Platform in an Industry 4.0 private 5G demonstration based on its OTNxT, an end-to-end platform for OT and IoT infrastructure. Together with its 5G and AI components and Intel’s Edge Platform, Wipro and Intel demonstrated how to develop and deploy 5G-enabled AI solutions at scale.

The Wipro and Intel demo also showcased high-performance, hybrid deployments using the Intel® Distribution of OpenVINO toolkit and the Intel® Geti platform, and how to deploy private 5G network solutions using Intel® FlexRAN reference architecture APIs through Intel’s Edge Platform integration.

Private 5G solutions like these are becoming extremely important as demand for meaningful data grows. Many businesses turn to private 5G network solutions that can provide the coverage, data control, and costs they need to support data-intensive applications at the network edge.

“Intel’s new Edge Platform helps us solve the challenges of edge complexity on standard hardware and enables Wipro to deliver the most compelling use cases to drive business results,” says Ashish Khare, General Manager and Global Head for IoT, 5G, and Smart Cities at Wipro.

Other use cases possible with the combined power of the Wipro OTNxT platform and Intel Edge Platform include assembly line automation, fleet management, cluster management, and application orchestration.

Many businesses turn to private #5G network solutions that can provide coverage, #data control, and costs they need to support data-intensive applications at the #NetworkEdge. Via @insightdottech

Also at the event was global computing intelligence company Lenovo, which shared how Intel’s Edge Platform, with its modular building blocks, integrates seamlessly with Lenovo Open Cloud Automation and Lenovo xClarity for enhanced automation and manageability. The platform enables developers to build, deploy, run, manage, connect, and secure distributed edge infrastructure, applications, and edge AI. It takes a horizontal approach to scaling the infrastructure needed for the Intelligent Edge and Hybrid AI, while bringing together an ecosystem of Intel and third-party vertical applications.

“The integrated solution delivers a seamless experience combining truly edge-native capabilities for security, near zero-touch provisioning and management, with Intel and Lenovo’s deep industry experience and unrivaled ecosystems. And, with built-in OpenVINO runtime, it enables businesses to adapt edge and hybrid AI solutions across industry verticals,” says Charles Ferland, Vice President and General Manager of ThinkEdge and Communication Service Providers for Lenovo.

Sustainable Private Network Solutions with Network Edge Partners

Elsewhere on the show floor, Nokia, a technology leader across mobile, fixed, and cloud networks, showcased how it leveraged 4th Gen Intel® Xeon® Scalable processors with Intel® vRAN Boost and the Intel® FlexRAN reference architecture to create an optimized form factor for private 5G solutions. This compact solution accelerates deployments across vertical markets while lowering capital expenditures (CapEx) and operating expenditures (OpEx).

IT provider Cisco was also at the event demonstrating how to deploy a private 5G solution for live video production. In its demonstration, it showcased how commercial off-the-shelf (COTS) platforms based on Intel technology can be used to live-broadcast a large sporting event.

Additional Intel partners at the event highlighting 5G capabilities included mission-critical intelligent systems software provider Wind River, telecom network solution provider Rakuten Symphony, IT technology company Supermicro, and software-based 5G platform company JMA Wireless.

To see what else you missed from the event and ecosystem partners, head over to the MWC Barcelona 2024 website.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.