What is "Explainable AI (XAI)" required for security tech today?

Asilla, an AI security system for facilities, will be equipped with an accountability reporting function.

Explainable AI (eXplainable AI: XAI) is a general term for the processes and methods that enable humans to logically understand and trust the inference results produced by AI models built with machine learning algorithms, including deep learning.

[Figure: Explainable AI (XAI) — processes and methods that turn black-box inference into a white box]

By making explainable the inference results that were previously treated as a black box (left side of the figure above), fairness, transparency, and reliability in AI-based decision making can be improved dramatically (right side of the figure above). This is why "explainable AI" is attracting growing attention, especially in the healthcare, finance, and mobility industries.
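
To make the idea concrete, the sketch below applies one widely used XAI technique, permutation feature importance, to a small hypothetical classifier: it measures how much the model's accuracy drops when each input feature is shuffled, and ranks the features the prediction actually relied on. The data, feature names, and model here are illustrative assumptions and are unrelated to Asilla's product.

```python
# Minimal sketch of permutation feature importance, a common XAI technique.
# All data, feature names, and the model are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical tabular data: the label depends mostly on the first feature.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)
feature_names = ["motion_speed", "dwell_time", "time_of_day"]  # illustrative only

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature and measure how much accuracy drops; a large drop means
# the model relied on that feature, which becomes the "reason" for its output.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: importance {score:.3f}")
```

In this toy example the first feature dominates the ranking, so a reviewer can see which input drove the decision instead of treating the model as a black box.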

In the security industry, software developed by a Russian government agency, which uses vibrations to estimate a person's mental state and detect suspicious persons, has long been used at the Olympics and international summits. Of course, there is no problem as long as it delivers solid results, but we have heard that it is a difficult product to evaluate because the basis for its inference results is unclear.

We at Asilla, on the other hand, after carefully considering the accountability of security products and their business value from the perspective of facility operators, launched Asilla, an AI security system for facilities, at the end of January 2022. In the two months since then, 16 companies have adopted the system, and the number of cameras made AI-capable by Asilla has reached 70.

This AI product can be added to existing security camera systems, turning equipment once dismissed as merely recording footage into "AI eyes" that instantly detect and identify abnormal or suspicious behavior, helping to prevent crime and to speed up emergency and first-aid calls. The system has been well received by facility owners, facility managers, and social infrastructure operators.

With the cooperation of our advisor Saito, a former police officer*1, this product logically incorporates the questioning process used by police officers, and can explain the reasoning behind an inference of abnormal or unusual behavior in the form "the reason is that ◯◯ is ◯◯ and ◯◯." This function is currently used internally to improve product quality, and is scheduled to be released as an XAI report function (tentative name) within this year.
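
As a purely hypothetical sketch of what such an explanation report could look like, the code below renders a detection result into a templated, human-readable reason. Every field name (Detection, camera_id, evidence) and the sentence template are assumptions for illustration and do not describe Asilla's actual XAI report function.

```python
# Hypothetical sketch: turning a behavior detection into an accountable,
# human-readable explanation. Field names and the template are assumptions.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Detection:
    camera_id: str
    timestamp: datetime
    behavior: str          # e.g. "sudden fall", "loitering"
    evidence: list[str]    # observations that triggered the inference


def explain(detection: Detection) -> str:
    """Render a detection as a 'the reason is that ...' style explanation."""
    reasons = ", and ".join(detection.evidence)
    return (
        f"[{detection.timestamp:%Y-%m-%d %H:%M}] Camera {detection.camera_id}: "
        f"flagged '{detection.behavior}' because {reasons}."
    )


if __name__ == "__main__":
    d = Detection(
        camera_id="entrance-03",
        timestamp=datetime(2022, 3, 1, 21, 40),
        behavior="suspicious loitering",
        evidence=["the person remained near the exit for over 10 minutes",
                  "they repeatedly looked around"],
    )
    print(explain(d))
```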

It is highly likely that the "Artificial Intelligence Act" currently being legislated in the EU will include requirements to explain AI, and in Japan the Ministry of Internal Affairs and Communications*2 and the Cabinet Office*3 have issued guidelines on the use of AI. These developments point to the need for XAI, and XAI may become mandatory in the future.

In addition, Been Kim, an AI researcher at Google, has stated that "I view interpretability as ultimately enabling a conversation between machines and humans"*4 and that XAI*5 is indispensable for the coexistence of humans and machines. We at Asilla agree with these words, and will continue to pursue the possibilities of XAI in the belief that it can create high business value by enabling efficient collaboration between humans and AI.

*1) Akira Saito, a former Saitama Prefectural Police officer, appointed as advisor to strengthen the AI security system
*2) Ministry of Internal Affairs and Communications, "Draft AI Utilization Principles"
*3) Cabinet Office, "Principles for a Human-Centered AI Society"
*4) Reuters, "AI is explaining itself to humans. And it's paying off"
*5) More precisely, interpretability

■ Asilla Corporation
Representative: Daisuke Kimura, Representative Director and CEO
Location: 1-4-2 Nakamachi, Machida-shi, Tokyo
Capital: 30 million yen
Business: Development and sales of the AI security system "AI Security asilla"
Official website: https://jp.asilla.com/

Asilla complies with the following guidelines regarding personal information and privacy in security camera images.

AI Charter: https://jp.asilla.com/ai-charter
Information Security Policy: https://jp.asilla.com/security
Privacy Policy: https://jp.asilla.com/privacypolicy
Terms of Use: https://jp.asilla.com/termsofservice

The name and logo of "Asilla" are registered trademarks of Asilla Corporation in Japan and other countries.

The names of companies and products mentioned herein are the trademarks or registered trademarks of the respective companies.

The contents of this press release, including service/product prices, specifications, contact information, and other information, are current at the time of publication. The information is subject to change without notice.


Contact Us
For inquiries about our products, please contact us.