
Executive Summary

The artificial intelligence (AI) era is the next stage in the evolution of the video surveillance camera market. Omdia forecasts that by 2025, 64% of all network cameras shipped globally will be AI cameras. The availability of AI on cameras and in dedicated on-premises analytics devices, meeting the bandwidth challenges inherent in most installations, will push video analytics from the fringes of the market to the mainstream. Edge AI will penetrate the high-end and mid-range markets.

The market growth of these new AI cameras will be driven by the development of new, powerful AI processors that enable extensive edge AI inferencing to be performed on camera or on other dedicated video analytics devices. While some intelligent functionality is already possible, today’s SoCs are limited in processing power and efficiency and cannot fully power the applications that the surveillance and security markets require.

The development of more powerful SoCs allows for the deployment of more accurate, reliable, and complex video analytics. These processors empower cameras and devices to perform complex AI processing, such as real-time processing of high-resolution video, multiple simultaneous video streams, and deep learning networks. They also enable devices to cover more regions of interest (RoI), detect and better identify a greater number of objects, and provide business analytics in addition to regular security monitoring.

An increase in processing power accompanied by lower power requirements will enable end users to increase the reliability of live analytics and the quality of after-the-fact analytics while reducing bandwidth and the reliance on central processing infrastructure. It can also reduce privacy concerns while increasing scalability. Perhaps more importantly, edge AI will support the development of new video analytics solutions, not previously possible, for customers across different markets.


By 2025, new chips are expected to enter the market at price points that will democratize analytics on the camera and even penetrate the lower end of the camera market, which does not use computer vision today.

Video analytics will be easier and less expensive to deploy. The cost of the servers needed to run AI algorithms, and the upstream bandwidth required for cloud analytics, will be reduced.

AI analytics will become as ubiquitous as megapixel cameras are in the video surveillance market of the future. Powerful edge AI analytics will be a game changer for video surveillance systems across industry verticals – from security and Smart City, to brick-and-mortar retail and manufacturing.

The Development of AI and Security IoT

The surge in artificial intelligence adoption in the past decade has been closely linked with developments in hardware. The development of general-purpose computing on graphics processing units (GPUs) enabled this trend to begin with. The trend itself then drove the development of very powerful server GPUs and their integration into high performance computing (HPC) systems, largely focused on cloud and data center deployments. With the spread of AI applications into mobile and embedded markets, a demand emerged for hardware that could perform neural network inference at the edge, in the context of power-, price-, and area (PPA)-constrained systems-on-chip (SoC).

Desktop or server-style GPUs were largely irrelevant here, as they tend to be large, expensive, and very demanding in terms of both power draw and cooling. Even in a high-end smartphone, the SoC itself typically contributes around $70 to the bill of materials, and battery life is a key selling point. Discrete GPUs for PCs, by contrast, often cost hundreds to thousands of dollars and require their own onboard cooling fan. As a result, there was an increasing demand for neural network acceleration on smartphone SoCs.

At the same time, the increasingly advanced SoCs developed for the large smartphone market began to penetrate other markets, such as robotics, automotive, industrial automation, UAVs, and video surveillance.


These devices are general-purpose computing platforms, optimized for constrained, embedded form factors. As such, using them makes a lot of sense for developers in these sectors, as they make it possible to develop more functionality in software or by training machine-learning models, and therefore to attain a greater degree of mass customization and to operate a faster development cycle.

Video surveillance technology has evolved considerably over recent decades. Analog cameras connected on dedicated closed circuits have been replaced with network security cameras capable of recognizing objects and behaviors. Video images that used to be recorded on VHS tape are now stored in the cloud and across distributed systems that allow simultaneous access from various locations and with different access rights. These networked video surveillance devices are well suited to leverage the benefits of edge AI applications.

Video Surveillance Market Evolution


Artificial Intelligence in the Video Surveillance Market

The next stage of network camera technology is the artificial intelligence era. Adoption is primarily driven by object detection and behavior analysis, but the metadata these functions generate will enable AI to provide other functionality. Examples include image coding enhancements, big data analysis and predictive crime centers, remote maintenance assessments, and image quality enhancements.

However, edge AI will be more transformational for the video surveillance market. It will push video analytics from the fringes of the market to the mainstream. Powerful analytics are now possible on the camera, meeting the bandwidth challenges inherent in most installations. Edge AI will also reduce the cost of the servers needed to run AI algorithms and the upstream bandwidth required for cloud analytics.

Fundamentally, it will reduce network taxation. Video analytics will be easier and less expensive to deploy. Perhaps more importantly, edge AI will support the development of new video analytics solutions for customers across different end-user markets. AI analytics will become as ubiquitous as megapixel cameras in the video surveillance market of the future.

The global market for professional video surveillance equipment is estimated to have grown in 2021. Despite the coronavirus pandemic, and other factors such as US-China trade tensions and supply chain constraints, the market is forecast to exceed $30 billion by 2025.

Total Video Surveillance Equipment Market by Region


 

Following the coronavirus pandemic, the video surveillance market has been shaped by two main trends:


US–China Relations

The trade and diplomatic relationship between China and the US has worsened. These two countries are the two largest domestic markets for video surveillance equipment, and their technology suppliers and component manufacturers are key to video surveillance production, supply chains, and technological development. Bilateral trade tariffs and sanctions from the US Bureau of Industry and Security (BIS) and US Department of Defense (DoD) have increased costs and inhibited trade between the two countries.


AI & Video Analytics

The latest deep learning powered video analytics are gaining adoption in both devices with embedded analytics and standalone analytics software licenses. In the back-end of the surveillance system, analytics are embedded in enterprise recorders and dedicated video analytics appliances.

In the short-to-medium term, greater edge processing in chipsets will add capabilities for analytics at the edge, and the price premiums for these capabilities are falling. In the longer term, most entry-level cameras and recorders will have a suite of embedded analytics. Vendors will offer “big data” platforms and AI chipsets will proliferate further into the market. With greater use of deep learning powered video analytics, a secondary layer of AI will be integrated into these systems: AI-powered rules and decision making based upon visual classifications and metadata.
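As a simple illustration of how such a secondary rules layer might sit on top of classification metadata, the sketch below applies one hypothetical rule to detections emitted by an AI camera; the field names, zone labels, and thresholds are illustrative assumptions, not any vendor’s actual schema.

```python
from datetime import datetime

# Hypothetical detection metadata as an AI camera might emit it
# (field names and values are illustrative only).
detections = [
    {"object_type": "person", "zone": "loading_bay", "confidence": 0.91,
     "timestamp": datetime(2025, 3, 1, 2, 14)},
    {"object_type": "vehicle", "zone": "car_park", "confidence": 0.88,
     "timestamp": datetime(2025, 3, 1, 2, 15)},
]

def after_hours_person_rule(det):
    """Secondary rules layer: flag people in a sensitive zone at night."""
    return (det["object_type"] == "person"
            and det["zone"] == "loading_bay"
            and det["confidence"] >= 0.8
            and (det["timestamp"].hour >= 22 or det["timestamp"].hour < 6))

alerts = [d for d in detections if after_hours_person_rule(d)]
print(f"{len(alerts)} alert(s) raised")  # -> 1 alert(s) raised
```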

Embedded Video Surveillance Device Shipments by Year - World


The transition from network cameras and recorders with no embedded analytics to devices sold with embedded AI analytics will happen over a four-year period, and by 2025 most devices will be AI enabled. In 2020, Omdia estimated that only 16% of all network cameras shipped were AI cameras, with China accounting for 94% of all AI cameras. By 2025, a large proportion of the mid-range market is expected to have been penetrated, with AI cameras accounting for 64% of all network cameras shipped globally and cameras sold outside of China accounting for 15% of the AI camera market.
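A back-of-envelope calculation, using only the percentages quoted above, shows how the non-China slice of total network camera shipments changes over the period.

```python
# Back-of-envelope using the shares quoted above.
# 2020: 16% of network cameras were AI cameras, 94% of those sold in China.
ai_share_2020, china_share_2020 = 0.16, 0.94
# 2025: 64% of network cameras are AI cameras, 15% of those sold outside China.
ai_share_2025, outside_china_share_2025 = 0.64, 0.15

outside_china_2020 = ai_share_2020 * (1 - china_share_2020)    # ~1.0% of all network cameras
outside_china_2025 = ai_share_2025 * outside_china_share_2025  # ~9.6% of all network cameras

print(f"AI cameras sold outside China: {outside_china_2020:.1%} -> {outside_china_2025:.1%} "
      "of total network camera shipments")
```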

Most analytics embedded on cameras are security and safety analytics. However, this algorithm type will lose share between 2020 and 2025, although the market size will continue to increase significantly.

Business intelligence applications will be a driver of growth for the video analytics market as an increasing number of end-users become familiar with the potential return on investment that can be drawn from analytics data.

Some suppliers of video analytics have opted to focus solely on business intelligence and optimization analytics, creating dedicated suites of sensor types along with supporting software to turn analytics data into operational insight. With the increasing power of hardware and advances in edge AI, business intelligence applications will benefit from a wider range of use cases and from more accurate and reliable analytics, allowing video streams to be used for multiple purposes.

Business intelligence and optimization analytics will gain share, from around 15% of all analytics device shipments in 2020 to almost 23% in 2025.

Drivers for Edge Processing

The classic drivers of edge computing deployment are as much at work in the video surveillance industry as anywhere else.

 


Scalability

Rising AI adoption in increasingly data-intensive use cases, such as monitoring high definition video, requires more and more computing infrastructure, especially as the AI models themselves are growing very fast. The sheer number of edge devices (cameras and recorders) is an argument for moving from centralized to distributed processing. Edge AI is a significant factor in the distribution of inference workloads to the edge (cameras and devices), improving the efficiency of the video analytics solution.

Processing and Power Budgets

From an AI processor standpoint, the development breakthrough came when two things were accomplished. The first was the ability to process video analytics at low power. Network cameras have a power budget, typically just three watts for the SoC-plus-DRAM subsystem, and the technical breakthrough was a new generation of SoCs that enable analytics within this budget.

Previously, the same processing was only possible with GPUs, FPGAs, and CPUs, but at prohibitively higher power. That is fine for a server, but it does not fit an analytics camera; the advance was to create new architectures that bring the power budget low enough. In addition, the network camera still needs to work as a camera. It must perform image processing and video encoding, sometimes at up to 4K resolution. The second challenge from the chip design point of view was therefore not only to run the analytics at low power, but to retain the performance for all of the image processing, such as high dynamic range and low-light processing and H.265 video encoding, plus all of the video analytics, within a very low power budget.
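To make the constraint concrete, the sketch below checks a hypothetical subsystem breakdown against the roughly three-watt budget mentioned above; the per-block figures are assumptions for illustration, not measured vendor data.

```python
# Illustrative check of a network camera's ~3 W SoC + DRAM power budget
# (the per-block figures below are assumptions, not vendor data).
POWER_BUDGET_W = 3.0

subsystem_power_w = {
    "image_signal_processing": 0.8,   # HDR / low-light processing
    "h265_encoding_4k": 0.7,
    "npu_inference": 1.0,             # deep learning analytics
    "dram": 0.4,
}

total = sum(subsystem_power_w.values())
headroom = POWER_BUDGET_W - total
print(f"Total draw: {total:.1f} W, headroom: {headroom:.1f} W")
assert total <= POWER_BUDGET_W, "subsystem exceeds the camera's power budget"
```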

AI Chipset Revenue by Device Processing and Power – Cameras Only


Resolution Increase and Bandwidth Management

There is an ongoing trend in the market toward higher resolution cameras. While resolution is just one aspect of video quality, the key benefit of higher-resolution cameras is sharper and more clearly defined pictures.

As with consumer video products, such as televisions, the typical resolution of video surveillance cameras continues to increase over time. However, it still lags that of many consumer video products. For video surveillance, full HD resolution is often considered a sufficient compromise between image quality, bandwidth, storage, and cost for most indoor applications, and production of network cameras with resolution lower than full HD has declined.

The largest proportion of unit shipments will shift from 2.1-megapixel (Full HD) cameras today to 4–5-megapixel cameras in 2025. 4K (8.3-megapixel and higher) cameras could then account for the largest proportion in the longer term, beyond 2030.

Increased camera resolution is a driver for more powerful edge AI. Consider, for example, a camera in an airport or a parking lot. If it is a 4K camera with a wide-angle lens, it can see people far off in the field of view. At 4K, it might have enough pixels on the target to identify a face or a license plate; at lower resolutions this is more challenging.
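A rough pixel-density calculation illustrates why resolution matters here; the lens angle and the identification threshold (a figure of around 250 pixels per metre is often cited) are assumptions used for illustration.

```python
import math

def max_distance_m(h_resolution_px, hfov_deg, required_px_per_m):
    """Farthest distance at which a camera still delivers the required
    horizontal pixel density, assuming a simple pinhole model:
    scene_width = 2 * d * tan(HFOV / 2), density = resolution / scene_width.
    """
    return h_resolution_px / (required_px_per_m * 2 * math.tan(math.radians(hfov_deg) / 2))

HFOV_DEG = 100        # wide-angle lens (assumption)
ID_PX_PER_M = 250     # commonly cited identification density (assumption)

for name, h_res in [("Full HD (1920x1080)", 1920), ("4K (3840x2160)", 3840)]:
    d = max_distance_m(h_res, HFOV_DEG, ID_PX_PER_M)
    print(f"{name}: identification-level detail out to ~{d:.1f} m")
```

With these assumptions, the 4K sensor roughly doubles the distance at which a face or plate retains enough pixels, which is why wide-angle, long-range scenes benefit most from higher resolution.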

Surveillance Cameras by Resolution

Currently, a large proportion of server-based analytics algorithms do not work on 4K input. Video is downscaled as it is transferred to the server to manage bandwidth and then upscaled again, typically not back to the full 4K resolution.

By processing the image on the camera, the analysis can be performed on the raw 4K image, and only the metadata is then transferred to the server for analysis. Due to this trend, Omdia estimates that the requirement for AI processing throughput will continue to increase. Consequently, camera chipsets capable of processing more than 5 TOPS will grow at a CAGR of 71% from 2020 to 2026.
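For context, a 71% CAGR sustained from 2020 to 2026 compounds to roughly a 25-fold increase for that chipset segment, as the short calculation below shows.

```python
# What a 71% CAGR from 2020 to 2026 implies for the >5 TOPS camera chipset segment.
cagr = 0.71
years = 2026 - 2020
multiplier = (1 + cagr) ** years
print(f"Cumulative growth over {years} years: ~{multiplier:.0f}x")  # roughly 25x
```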


While a variety of SoCs already service this market, they currently address the high-end camera segment. By 2025, new chips are expected to enter the market at price points that will democratize analytics on the camera and even penetrate the lower end of the camera market, which does not use computer vision today. This penetration will also extend to dedicated analytics devices and recorders.

Additionally, processing power will increase while power requirements fall. This will enable even more to be done on the camera, such as running multi-sensor cameras off a single SoC. While this is already possible today, it is limited by power, functionality, and performance.

Challenges for Edge AI Processing

In the video surveillance industry, the development of deep learning analytics over the last few years has had the largest effect on the adoption of video analytics technology. Deep learning analytics provide a level of accuracy and reliability in object and behavior classification significantly greater than what can be provided by rules-based analytics.


Accuracy

A long-held complaint levied against traditional analytics products was that these algorithms were unable to distinguish between objects and behaviors that a human being would have no problem classifying. This lack of intuition on the part of computer vision algorithms results either in missed security breaches or false alarms. The ability of deep learning algorithms to view a scene intuitively, in the same way a human viewer would (recognizing objects and patterns rather than pixel changes), means that detection accuracy increases dramatically while false alarm rates fall considerably. This is a particular issue for live alerts that trigger automated or semi-automated actions.
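To make the operational impact concrete, the sketch below compares daily false alarm workload at two hypothetical precision levels; the alert volume and precision figures are assumptions for illustration, not measured values.

```python
# Illustrative only: how detection precision drives operator workload.
# The alert volume and precision figures below are assumptions.
DAILY_TRIGGERS = 400  # analytic triggers per day across a site (assumption)
scenarios = {"rules-based analytics": 0.55, "deep learning analytics": 0.95}

for name, precision in scenarios.items():
    false_alarms = DAILY_TRIGGERS * (1 - precision)  # alerts that are not real events
    print(f"{name}: ~{false_alarms:.0f} false alarms/day to review")
```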

Typically, in instances where an excessive number of false alarms have been triggered, an operator would either ignore the alarms or reduce the sensitivity of the algorithm. For instance, for users in the critical infrastructure sector, a missed security event due to analytics not functioning could mean safety and security for the public is jeopardized; here it may be better to investigate false alarms, rather than miss events.

As camera resolutions and image quality increase, AI analytics will achieve even higher accuracy rates, but the trade-off will be the requirement for more powerful edge devices.

Omdia expects that the use of distributed computational power networks will continually increase. Edge analytics will further integrate with centrally processed analytics on recorders and servers, and even in the cloud. This will become particularly prevalent for more compute-intensive algorithms, such as mass facial recognition or widespread traffic monitoring and vehicle classification. In situations such as these, the processing of the analytic will be shared across several devices and locations. Distributed analytics solutions will provide an opportunity for powerful edge AI to work collaboratively with cloud-based analytics solutions.

However, deep learning analytics, with large, complex models and high compute demands, have been a particular problem for today’s edge-based AI. As the power of edge devices increases, and it is expected to increase considerably over the coming years, more powerful edge AI offers significant benefits over the technology that preceded it in several areas.

Edge AI Analytics Use-Cases

$4B

Market for video surveillance equipment, standalone analytics, servers, and storage in city surveillance (2021)

City Surveillance and Smart Cities

City surveillance is one of the largest end-user applications for video surveillance equipment globally, worth an estimated $4 billion in 2021. The market includes all public space within the remit of a city surveillance or public safety project.

Security cameras are used by police departments and other government agencies to monitor these open spaces. This can be through spot checks of key locations to ensure there is no activity requiring a police response, or through regular “guard tours” to supplement police officers. On specific occasions, live events can also be monitored in real time.

In recent years, the city surveillance market has evolved. The first integration was with communications technology and command and control (C&C) rooms to create safe city solutions. These systems connect video surveillance, analytics, entrance control, gunshot detection, and other sensor inputs into a command-and-control infrastructure. Data analytics can be layered into the system to create predictive crime centers capable of identifying patterns in behavior.

The next evolution is the Smart City. Building on the public safety aspect of safe city solutions, smart cities support mobility and transportation, governance, physical infrastructure, and energy efficiency. Video analytics solutions are well placed to provide traffic management solutions and parking violation identification as part of a broader Smart City.

Challenges with Current Solutions

Video analytics have been used in the city surveillance market to support the identification of vehicles or people, protect virtual perimeters and alert to loitering or other anti-social behavior. A key challenge for end-users has been ensuring that the alerts are accurate, timely and not repeated for the same event.

In one early installation in the UK, the local council disabled its city surveillance video analytics system because it generated so many false positive alerts that more operators were needed to manage them than were initially staffed in the control room. The moral responsibility to respond to every alert meant that city leaders could not continue to work with a solution whose accuracy on real events was not high.

Another challenge with existing analytics systems is the large geographic spread of security cameras around a city. In some cases, the cameras are networked using wireless infrastructure as cabling cannot be installed. Bandwidth is a real challenge, especially with the need to deploy higher megapixel cameras to meet future image quality requirements. High-end public safety analytics are often run on the server due to processing requirements, which means these high-resolution images must be sent over the network, adding cost and complexity.

Benefits of High-performance Edge AI Solutions

A critical benefit of high-performance edge AI analytics is that accurate analytics algorithms can be run on the camera, in real time. This does two things in a city surveillance project. First, it means that any alerts generated to the control room are accurate and timely. Each network camera has the processing power required to provide strong metadata points (such as alert type or object type) and generate alerts that are real and trusted. Trustworthy alerts provide more flexibility in operations and can save cost through more efficient use of resources or people. Analytics solutions that are not accurate will eventually be turned off by the end user. Edge analytics are also timely because the processing capability is dedicated on the device.
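As an illustration of what such metadata might look like, the sketch below defines a hypothetical alert record; the field names and values are assumptions, not any vendor’s actual schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class EdgeAlert:
    """Hypothetical shape of the metadata an edge AI camera might send
    instead of raw video (field names are illustrative only)."""
    camera_id: str
    alert_type: str   # e.g. "perimeter_breach", "loitering"
    object_type: str  # e.g. "person", "vehicle"
    confidence: float
    timestamp: str

alert = EdgeAlert(
    camera_id="cam-017",
    alert_type="perimeter_breach",
    object_type="person",
    confidence=0.93,
    timestamp=datetime.now(timezone.utc).isoformat(),
)

# A few hundred bytes of JSON rather than a continuous high-resolution stream.
print(json.dumps(asdict(alert)))
```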

Second, powerful edge AI solutions support better bandwidth management and reduce storage costs. City surveillance projects are characterized by security cameras located in remote locations around parks and streets.


It can be challenging to get high-resolution images back to a central location. Running the analytics on the edge device mitigates this challenge.

It also means that every algorithm is using the highest resolution images possible, which improves the overall accuracy of the solution.

There is a further advantage for high performance edge AI solutions in the form of software-defined cameras. City leaders have evolving physical security requirements. Edge devices with the processing capability to run multiple analytics, or to handle new algorithms, can provide a more flexible city surveillance solution. This is the next stage of the video analytics market where more powerful edge processing ramps up the solution capability.

Traffic Management

Video analytics systems can be installed around a city to generate data on traffic flow, congestion, and vehicle numbers. This metadata can be used to manage the roads more efficiently, ensuring traffic is re-routed where possible and that related traffic controls consider the current road conditions. These types of solutions need accurate edge analytics on the camera.

Due to the remote locations of traffic management systems, there are bandwidth challenges in sending images back to a central location. A better solution is to run algorithms on the device and send back the information rather than the video images. However, scenes are often busy, with hundreds of vehicles moving in and out of the field of view. High-performance analytics are needed to manage the complexity of objects and activity alerts. Given the importance of an efficient traffic system, the metadata needs to be extremely accurate and low latency, and so will benefit from increased processing power.
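The bandwidth argument can be made concrete with a rough comparison; the bitrates and event rates below are assumptions for illustration only.

```python
# Rough comparison of uplink needs: streaming video vs. sending metadata only.
# The figures below are assumptions for illustration.
VIDEO_STREAM_MBPS = 8.0   # an assumed 4K H.265 stream
EVENTS_PER_MINUTE = 120   # busy junction: vehicle detections per minute (assumption)
BYTES_PER_EVENT = 300     # JSON metadata per detection (assumption)

metadata_mbps = EVENTS_PER_MINUTE * BYTES_PER_EVENT * 8 / 60 / 1_000_000
print(f"Video uplink:    {VIDEO_STREAM_MBPS:.3f} Mbps")
print(f"Metadata uplink: {metadata_mbps:.4f} Mbps "
      f"(~{VIDEO_STREAM_MBPS / metadata_mbps:,.0f}x less)")
```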

Traffic enforcement is another Smart City application well suited to video analytics. Security cameras can monitor the movement of traffic for vehicles that go through a red light. Automatic number plate recognition (license plate recognition) can then identify the vehicle so the system can generate a ticket for the violation. The video surveillance images can be used to prove the event in the case of any legal dispute. Similar solutions can be used to manage city parking, ensuring that vehicles that spend too long in a parking space are ticketed. As with traffic management solutions, there are significant bandwidth benefits to running these AI analytics on an edge device.

Commercial

Video surveillance has been deployed in commercial buildings for decades. Security cameras protect perimeters and internal locations in hotels, offices, and restaurants around the world. The market is an important revenue stream for the video surveillance industry, accounting for over 10% of annual equipment revenues.

Commercial buildings typically have many employees, contractors, and visitors accessing the site daily. They also have perimeters that need to be secured but may not be high risk enough to require full manned guarding or fencing. Virtual tripwires, loitering alerts, people tracking, and object identification analytics have all been used to try to meet the physical security challenges of a commercial security manager.

Challenges with Current Solutions

The traditional analytics market has some challenges in the commercial market. In some use cases, basic analytics are deployed in relatively complex scenes, such as hotel and office lobbies where large numbers of people are moving around. With less powerful analytics, the alerts generated can be difficult for physical security personnel to interpret, as they may be inaccurate or delayed due to processing issues. As in the city surveillance market, inaccurate analytics will eventually be turned off.

Benefits of High-performance Edge AI Solutions

In the commercial market, one of the key benefits of adopting high-performance edge analytics is the reduction in network taxation: there is less cost associated with networking, servers, storage, and cloud because the analytics are done on the camera. That said, the analytics need to be powerful enough to generate good metadata, and this requires processing capability.

Commercial is another market where security requirements can change. Increased processing power, aligned with higher image quality, means security cameras can do more things at once. This could mean alerting to a perimeter breach while at the same time counting the number of people that are entering a building. This additional functionality could be used to create more efficient security solutions in the future.

6.8%

The percentage of global video analytics revenues in the airport vertical market (2021)

Airport

Transportation is an important market for the video surveillance and video analytics industry. It is also considered one of the faster growing vertical markets. Airports were one of the earliest video analytics market applications. They have higher security requirements than most industries, as well as large perimeters that need to be protected, but are difficult to protect efficiently.

Airports also pose a challenge because the volume of people passing through is high, and the system must handle that volume within accuracy and cost constraints. The security system should be able to generate a real-time alert when necessary, which can be difficult because it is common for passengers to loiter in areas while waiting for a flight departure.

Challenges with Current Solutions

Large, remote perimeters and high-risk airside locations mean airports are difficult sites to protect. The addition of thousands of passengers, retail locations, the airside–landside border, and the number of employees, contractors, and flight staff with access adds to the complexity.

Some of the early video analytics solutions deployed in airports to protect the perimeter were not that successful. In one US airport, the analytics were turned off due to the regularity of alerts. In this scenario, the alerts were not false; they were repeatedly triggered by the same animal just beyond the perimeter fence. The result was that control room operators were constantly dismissing the alert, which became labor intensive rather than labor saving. This highlights that the accuracy and flexibility of the algorithm are extremely important.

Benefits of High-performance Edge AI Solutions

Airports need high quality analytics that are accurate and reliable. Terminal buildings have thousands of passengers passing through them daily. High performance analytics are needed to ensure that the volume of objects, such as passengers and luggage, can be detected and tracked when required.

There are also extremely complex analytics needed for specific applications, such as left luggage detection. Here, the algorithm must identify that an object has been left for a predefined period and generate an alert, while filtering out all other activity in the scene. This is complex and requires powerful processing capability. The cost of getting this wrong is that either a dangerous object is left unchecked or a terminal building is evacuated based on a false alarm. Neither outcome is positive for the airport security operations team.
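A heavily simplified sketch of the left-luggage logic described above is shown below; the tracking model, thresholds, and field names are assumptions, and production systems handle ownership, occlusion, and re-identification far more robustly.

```python
from dataclasses import dataclass

# Simplified sketch of left-luggage logic: an object raises one alert once it
# has been stationary and separated from its owner for longer than a
# predefined dwell period. Thresholds are illustrative assumptions.
DWELL_THRESHOLD_S = 120
OWNER_DISTANCE_THRESHOLD_M = 5.0

@dataclass
class TrackedObject:
    track_id: int
    stationary_seconds: float
    owner_distance_m: float  # distance to the person last associated with it
    alerted: bool = False

def check_left_luggage(obj: TrackedObject) -> bool:
    if obj.alerted:
        return False  # suppress repeat alerts for the same object
    if (obj.stationary_seconds >= DWELL_THRESHOLD_S
            and obj.owner_distance_m >= OWNER_DISTANCE_THRESHOLD_M):
        obj.alerted = True
        return True
    return False

bag = TrackedObject(track_id=42, stationary_seconds=150, owner_distance_m=30.0)
print(check_left_luggage(bag))  # True: alert once
print(check_left_luggage(bag))  # False: stay silent afterwards
```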

In more traditional analytics applications, such as tripwire alerts, there are benefits from improving the accuracy of the algorithm. Airports are high security environments and any threat to a border is taken seriously. False alarms can generate armed responses which are expensive. Missed alerts are significant security breaches and there can be implications for national security.

As AI processing power increases on cameras, the solution accuracy for analytics, such as tripwire, will increase. These applications have been shown to have utility in the market. Improvements in the accuracy and latency of alerts will only be of benefit to end-users. Outside of the terminal, there are huge benefits in bandwidth management by running powerful analytics at the edge.

Retail (Business Intelligence)

The retail industry has been using video surveillance for a long time. Originally, these systems fed video to a control room where security personnel watched for suspicious activity. Now, retail end users have embraced video analytics technology to support their loss prevention activities. These include monitoring point-of-sale (POS) activity for fraudulent or unauthorized activity or tracking potential perpetrators through a store.

While loss prevention is the primary use-case for video analytics in the retail market, there is an emerging technology opportunity for business intelligence and operational improvement solutions. Much of the same infrastructure can be used to support these business intelligence applications. The benefit for the retailer is in understanding shopping traffic patterns and customer demographics, measuring retail store trends, and increasing conversion and overall sales for the store.

Challenges with Current Solutions

One of the biggest challenges for current analytics solutions is the ability to run the multiple algorithms required to provide both loss prevention and business intelligence data. Most security cameras in retail were installed for loss prevention, so that application takes priority. Retailers therefore face a challenge in using existing cameras and infrastructure to support potentially revenue-generating applications such as people counting, queue length management, customer heat mapping, and stock management.

Benefits of High-performance Edge AI Solutions

Increasing the processing power on the camera is an important trend in the retail market. It means that better algorithms can be processed on the edge device and that more applications can be done on the same camera. This could be a mixture of loss prevention applications and business intelligence and optimization algorithms.

Existing solutions must prioritize a single algorithm to run on the camera, which in retail means loss prevention. High-performance edge AI solutions allow the camera to provide the same loss prevention capability with the addition of business intelligence analytics. Not only does this reuse the existing infrastructure, but the higher processing capability means more accurate algorithms will generate better data across the different applications.

Furthermore, analytics can be used for different purposes at the same time. One example is where dwell time analytics have been deployed to both alert that a person is loitering near a product and to identify a loss prevention risk. In this solution, an alert can be generated for a member of staff to attend to the potential customer. In real deployments, this type of solution has increased sales while reducing theft. Many existing video analytics cameras in the retail market would not be able to take advantage of this approach because the camera has maximized its processing capability on a more traditional loss prevention algorithm.


Edge AI solutions that can meet the processing demands of multiple algorithms at once can open retail solutions to this type of new opportunity.

This approach also supports the development of hybrid solutions. Here, powerful edge analytics can do some of the heavy lifting, such as analyzing faces or activity, before sending specific information or partial video images back to a central analytics system. This can make the overall solution quicker, more accurate or simply more efficient due to the distribution of processing capability.
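A minimal sketch of such a hybrid split, under the assumption that the edge device runs detection and forwards only high-confidence crops, might look as follows; the function names, frame handling, and threshold are illustrative placeholders, not a real product API.

```python
# Sketch of a hybrid split: the edge device runs detection and forwards only
# cropped regions above a confidence threshold to a central analytics service.
# Function names and the threshold are assumptions for illustration.
CONFIDENCE_THRESHOLD = 0.8

def run_edge_detection(frame):
    """Placeholder for the on-camera detector; returns (label, confidence, crop)."""
    return [("face", 0.92, frame[0:128]), ("person", 0.45, frame[128:256])]

def forward_to_central(label, crop):
    """Placeholder for sending a small crop (not the full frame) upstream."""
    print(f"forwarding {label} crop of {len(crop)} bytes")

def process_frame(frame):
    for label, confidence, crop in run_edge_detection(frame):
        if confidence >= CONFIDENCE_THRESHOLD:
            forward_to_central(label, crop)  # heavy lifting stays at the edge

process_frame(bytes(512))  # forwards only the high-confidence face crop
```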

Fundamentally, edge AI will take the retail video analytics market into the mainstream. It will ensure high-performance analytics are on every camera, providing both loss prevention and business intelligence applications concurrently and in real time. It will simplify the installation and operation of these systems, ensuring loss prevention professionals can make best use of the technology. Finally, it will help justify the cost of deployment through the operational improvements possible from a single high-performance camera providing both security and business intelligence applications.