AI (artificial intelligence) is monitoring you

The rapid development of artificial intelligence has changed many parts of everyday life, and surveillance systems are one of them. Traditional monitoring equipment mostly serves as after-the-fact evidence in court and consumes a great deal of labor. As the building blocks of artificial intelligence (basic research, computing power, and training data sets) have matured, the application scenarios for surveillance equipment have multiplied. This article takes the practical results of the IC Realtime and Boulder AI projects as examples to discuss the prospects, problems, and open questions around AI surveillance equipment. Let's take a look.

Surveillance cameras are often called digital eyes: through them people can watch live feeds and recorded footage. But most surveillance cameras are passive. They sit there as a deterrent, or provide evidence after something goes wrong; a car gets stolen, so you check the CCTV.

That is changing. AI gives surveillance cameras digital brains to match their eyes, letting them analyze live video on their own without help from humans. This could mean better public safety and more support for the police, making it easier to spot crimes and accidents.

But it also raises serious questions about privacy and creates new risks for social justice.

Imagine a government that can track huge numbers of people through closed-circuit television, police who can digitally follow you across a city just by uploading your photo to a database, or a shopping mall that runs a biased algorithm on its cameras because it dislikes a particular group of people. What happens in these situations?

AI surveillance starts with video search

Although those scenarios are still some way off, we are already seeing the first products that combine artificial intelligence and surveillance. IC Realtime is one example. When its flagship product launched last December, it was hailed as the Google of closed-circuit television. Ella is an app and web platform that uses AI to analyze what is happening in a video feed and makes it instantly searchable. Ella can recognize thousands of natural-language queries, letting users find a specific clip, people wearing clothing of a certain color, or even the make and model of a particular car.

In an online demonstration, IC Realtime CEO Matt Sailor showed off Ella's features. Connected to roughly 40 cameras surveying an industrial park, it handled searches such as "a man in red", "UPS truck", and "police car", and all of them returned relevant footage within a few seconds.

"Suppose there is a robbery and you haven't yet worked out what happened," Sailor said, "but a Jeep happened to pass by at the time. We search 'Jeep' in Ella, and the screen shows the different Jeeps passing by. That will be the number one advantage of combining artificial intelligence with closed-circuit television." Ella runs on Google Cloud and can search footage from almost any CCTV system. Sailor believes the technology lets users easily find what they are looking for without sifting through hours of video.
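Ella's internals are not public, but the idea behind this kind of natural-language video search can be illustrated with a toy index. The sketch below assumes an object detector has already tagged short clips with labels; the clip data, label sets, and the search function are illustrative assumptions, not IC Realtime's implementation.

```python
# Conceptual sketch only: Ella's real pipeline is not public.
# A text query is matched against per-clip labels that a detector
# produced offline (labels and clips below are made up).
from dataclasses import dataclass

@dataclass
class Clip:
    camera_id: str
    start_s: float
    labels: set[str]  # hypothetical detector output, e.g. {"person", "red"}

INDEX = [
    Clip("lot-3", 120.0, {"person", "red"}),
    Clip("gate-1", 815.5, {"truck", "ups"}),
    Clip("gate-1", 903.0, {"car", "police"}),
]

def search(query: str) -> list[Clip]:
    """Return clips whose labels contain every word in the query."""
    terms = set(query.lower().split())
    return [clip for clip in INDEX if terms <= clip.labels]

print(search("ups truck"))  # -> the gate-1 clip starting at 815.5 s
```

A production system would of course rank by detection confidence and search millions of clips, but the basic shape (detect offline, index labels, match queries against the index) is the same.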

IC Realtime has also done well in the "smart" home security camera market created by companies such as Amazon, Logitech, Netgear, and Google's Nest.

Using Ella to search for a person in red

"Our trout identification accuracy in Idaho is now as high as 100 percent"

While IC Realtime offers cloud-based analytics that can upgrade existing dumb cameras, other companies build artificial intelligence directly into their hardware. A major advantage of on-device AI is that it does not need an internet connection to work. Boulder AI is one such startup: it sells "vision as a service" using its own standalone AI cameras, and it can tailor machine vision systems to individual customers.

Founder Darren Odom gave the example of a client that operates a dam in Idaho. To comply with environmental regulations, the client monitors the number of fish moving past the dam. "At first they had someone sitting at a window counting how many trout swam by," Odom said. Later they switched to surveillance video, with a person observing it remotely. Finally, they contacted Boulder AI to build a customized AI CCTV system that identifies fish. "We used computer vision to identify the fish," Odom said proudly, "and our trout identification accuracy in Idaho is now as high as 100 percent."
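Boulder AI has not published its pipeline, so the following is only a minimal sketch of how a frame-by-frame trout counter could be wired together. The model file, confidence threshold, frame source, and edge-counting trick are all assumptions made for illustration.

```python
# Hypothetical sketch: count trout passing a camera by classifying frames.
# Boulder AI's actual system is proprietary; the model file, threshold,
# and video path below are illustrative stand-ins.
import cv2
import torch
from torchvision import transforms

# Assume a fine-tuned binary classifier (trout vs. not-trout) saved locally.
model = torch.jit.load("trout_classifier.pt")  # hypothetical model file
model.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

cap = cv2.VideoCapture("fish_ladder_feed.mp4")  # hypothetical CCTV clip
count, prev_hit = 0, False
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        score = torch.sigmoid(model(preprocess(rgb).unsqueeze(0)))[0, 0].item()
    hit = score > 0.9          # confidence threshold (assumed)
    if hit and not prev_hit:   # count rising edges so one fish isn't counted every frame
        count += 1
    prev_hit = hit
cap.release()

print(f"trout sightings: {count}")
```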

IC Realtime represents the general consumer market, and Boulder AI represents the bespoke contractor market. In both cases, though, what these companies offer today is only the tip of the iceberg. Just as machine learning has made rapid progress in perceiving objects, and now that the key ingredients (basic AI research, computing power, and training data sets) are in place, the ability to analyze scenes, events, and actions is expected to improve just as quickly.

Two of the biggest data sets for video analysis come from YouTube and Facebook, companies that also hope artificial intelligence will help them manage content on their platforms. YouTube's data set, for example, contains more than 450,000 hours of labeled video and is meant to spur "innovation and advancement in video understanding." Organizations such as Google, MIT, and IBM are all involved in similar projects.

IC Realtime is already developing advanced tools such as facial recognition, and Boulder AI is exploring similar advanced analytics.

The biggest obstacle is still low-resolution video

For experts in surveillance and artificial intelligence, these new capabilities raise problems that are both technical and ethical. As is often the case with AI, the two are intertwined: it is a technical problem that machines cannot understand the world the way humans do, but it becomes an ethical problem when we let them make decisions for us anyway.

Although artificial intelligence has made tremendous progress in video surveillance in recent years, computers still face fundamental challenges in understanding video. A trained neural network can analyze human behavior by breaking the body down into parts (arms, legs, shoulders, head, and so on) and watching how these keypoints move from one frame to the next. From that, AI can tell you whether someone is running, but only if the video resolution is good enough.
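To make the "is this person running?" idea concrete, here is a toy version of the kind of signal such a system might compute once a pose model has extracted keypoints. The keypoint track, frame rate, and speed threshold are all made up for illustration; real systems use far richer features than a single keypoint's speed.

```python
# Illustrative sketch, not any vendor's system: deciding "running vs. walking"
# from a pose keypoint tracked across frames. Extracting the keypoints
# themselves (e.g. with a pose-estimation model) is assumed to be done already.
import math

# (x, y) pixel positions of one person's hip keypoint in consecutive frames (made up).
hip_track = [(100, 220), (112, 221), (125, 219), (139, 222)]

FPS = 25             # camera frame rate (assumed)
RUN_SPEED_PX = 250   # pixels per second separating running from walking (assumed)

def mean_speed(track, fps):
    """Average keypoint displacement per second over the track."""
    steps = [math.dist(track[i], track[i + 1]) for i in range(len(track) - 1)]
    return sum(steps) / len(steps) * fps

speed = mean_speed(hip_track, FPS)
print("running" if speed > RUN_SPEED_PX else "walking", f"({speed:.0f} px/s)")
```

Note how fragile this is: on a low-resolution or distant camera the keypoints jitter by only a few pixels, and the signal disappears, which is exactly the resolution problem described above.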

This is a big problem for CCTV systems, because cameras often capture dark footage from awkward angles. Take a convenience store camera: it is aimed at the cash register, but it also takes in the window facing the street. If a robbery happens outside, partly hidden from the camera, the AI will be of little use.

Similarly, while artificial intelligence is good at recognizing events in video, it still cannot extract the important context around them. Take the neural network that analyzes human behavior: the camera might tell you "this person is running," but it cannot tell you whether they are running because they are about to miss a bus or because they have just stolen someone's phone.

These accuracy issues deserve reflection: what computers perceive in video is still far from what humans see. But progress has been rapid. Tracking vehicles by license plate, identifying items such as cars and clothing, and automatically following a person across multiple cameras are recognition tasks that are already quite reliable.
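The multi-camera tracking mentioned above is usually framed as re-identification: compare an appearance embedding of the target with embeddings of detections from other cameras and keep the closest matches. The sketch below uses hand-made vectors in place of a real embedding model, so it only illustrates the matching step, not any particular vendor's system.

```python
# Toy sketch of cross-camera re-identification. The embeddings are made-up
# vectors standing in for the output of an appearance model.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Appearance embedding of the person we want to follow (seen on camera A).
query = np.array([0.9, 0.1, 0.3])

# Candidate detections seen on other cameras (all values illustrative).
gallery = {
    "camera_B/person_1": np.array([0.88, 0.12, 0.28]),
    "camera_B/person_2": np.array([0.10, 0.95, 0.40]),
    "camera_C/person_7": np.array([0.85, 0.15, 0.33]),
}

# Rank candidates by similarity to the query; the top match is the
# most likely re-appearance of the same person on another camera.
matches = sorted(gallery.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
for name, emb in matches:
    print(f"{name}: similarity {cosine(query, emb):.2f}")
```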

There are still many problems for AI surveillance to solve. Yet even these basic applications can have powerful effects. As the software improves, the systems collect more data, which in turn helps the software improve further. And if we train AI surveillance systems on old footage, such as archives from CCTV or police agencies, the prejudices that already exist in society are likely to be carried forward.

ACLU senior policy analyst Jay Stanley argues that even if we could remove the bias from these automated systems, that would not make them benign: turning CCTV cameras from passive into active observers could have a huge negative impact on civil society.

"We want people not only to be free, but to feel free."

We want people not only to be free, but to feel free. That means they should not have to worry about an unknown, unseen audience interpreting or misinterpreting their every movement and utterance, or about the consequences that might follow.

In addition, false alarms from AI monitoring could deepen conflict between law enforcement and the public. In one Denver shooting, for example, a police officer mistook a pellet gun used for pest control for a real weapon, with fatal results. If a person can make such a mistake, how can we expect a computer to avoid it? And if monitoring systems become automated, such errors may only become more common.

As AI surveillance becomes more common, who will govern these algorithms?

In fact, what we see in this field is only one part of a broader trend in artificial intelligence: we use relatively crude tools to try to classify people, and the accuracy of the results is questionable.

"What makes me uneasy is that many of these systems are being applied to our core infrastructure without any democratic process for measuring their effectiveness and accuracy." Pattern recognition drawn from biased data reproduces the culture and history embedded in that data, and this inevitably raises the problem of AI surveillance being abused.

To this question, IC Realtime gave an answer common in the technology industry: the technology itself is value-neutral; any new technology can fall into criminal hands, and that is unavoidable, but the value it brings outweighs its drawbacks. Similar debates surround artificial intelligence in banking, retail, and healthcare, where the technology also has many applications.
