"Realtime Multi-Person Pose Estimation" is an inference model that identifies person pose information from images and videos of people, based on the two-dimensional or three-dimensional coordinates of each joint point in multiple persons.
Detecting joint points is extremely difficult due to the occlusion caused by overlapping people and the extremely wide variety of clothing and possessions, and is classified as one of the most difficult inference models in deep learning.
Here we would like to present the strengths of our product and its future prospects, based on a comparison of several pose estimation algorithms.
Accuracy was measured as Object Keypoint Similarity (OKS)-based mAP. The same performance difference was observed on GPUs such as the GTX and RTX series.
Note: The graph above is based on publicly disclosed information from 18 companies (domestic and foreign) and 4 universities, summarized for the 3 best-performing companies and 1 university.
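For readers unfamiliar with the metric, OKS scores how close predicted keypoints are to the ground truth, scaled by object size, before mAP is computed over it. The sketch below is a minimal illustration assuming COCO-style keypoints; the function name and the per-keypoint falloff constants are illustrative, not the official COCO values.

```python
import numpy as np

# Per-keypoint falloff constants (k_i). These five values are
# illustrative placeholders, not the official COCO constants.
K = np.array([0.026, 0.025, 0.025, 0.035, 0.035])

def oks(pred, gt, visible, area, k=K):
    """Object Keypoint Similarity between one predicted and one
    ground-truth pose.

    pred, gt : (N, 2) arrays of keypoint (x, y) coordinates
    visible  : (N,) boolean mask of labelled ground-truth keypoints
    area     : object scale (s**2 in the OKS formula)
    """
    d2 = np.sum((pred - gt) ** 2, axis=1)   # squared pixel distances
    e = np.exp(-d2 / (2 * area * k ** 2))   # per-keypoint similarity in (0, 1]
    return float(e[visible].mean())         # average over labelled keypoints
```

A prediction identical to the ground truth scores 1.0, and the score decays toward 0 as keypoints drift, faster for small objects.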
■ JETSON NANO
NVIDIA's compact AI edge device. The module measures only 70 x 45 mm, and a variety of commercial devices built around it are already on the market.
We believe the ongoing trend toward smaller, cheaper devices will rapidly accelerate the social implementation and democratization of AI.
2020-0813_AsillaPoseV3 = 12.3 fps, mAP 36.5% @ NVIDIA Jetson Nano
The higher the fps, the greater the product's advantages: (1) time-series post-processing becomes more accurate because it has richer information to draw on, and (2) more GPU resources can be allocated to other processing running in parallel.
Our product achieves roughly twice the throughput of CMU OpenPose. As with the mAP figures above, this benchmark was also run on the NVIDIA Jetson Nano.
MOTA (Multiple Object Tracking Accuracy) measures how well an individual can be tracked from pose estimation results. Our MOTA currently stands at 60.56% (as of August 2020); uniquely, we achieve this at only 5 fps, while other algorithms require frame rates of 25 to 30 fps.
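For reference, MOTA comes from the CLEAR MOT metrics: one minus the ratio of total errors (misses, false positives, and identity switches) to the total number of ground-truth objects across all frames. A minimal sketch of that definition:

```python
def mota(num_misses, num_false_positives, num_id_switches, num_gt):
    """Multiple Object Tracking Accuracy (MOTA), per the CLEAR MOT metrics.

    All arguments are totals summed over every frame of the sequence:
    num_misses          -- ground-truth objects with no matched hypothesis (FN)
    num_false_positives -- hypotheses with no matched ground truth (FP)
    num_id_switches     -- matched tracks whose identity changed (IDSW)
    num_gt              -- total count of ground-truth objects
    """
    errors = num_misses + num_false_positives + num_id_switches
    return 1.0 - errors / num_gt
```

For example, 100 misses, 80 false positives, and 20 identity switches over 1000 ground-truth objects gives a MOTA of 0.80 (the figures here are made up for illustration).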
We are working to support not only the NVIDIA edge devices mentioned above (Jetson) but also a variety of other edge devices. Each device has its own strengths and weaknesses, and our goal is to let users choose the device that best matches their needs at deployment time.
- TOSHIBA Visconti
- Xilinx Zynq UltraScale+
- NVIDIA Jetson (Nano, AGX, NX, TX2)
Of the 18 companies and 4 universities, six have publicly announced edge-device support. Among them, we are the only company that both supports Xilinx and is preparing support for Visconti.
The comparison above is based solely on publicly available information on the Internet; we believe that digging deeper on a global scale would reveal even more advanced technologies.
Aiming to be the world's number one in this field, Asilla will actively enter the following global standard competitions by the fall of 2020 to raise its international profile as part of its global strategy.
For example, the following global competitions are planned.
We also plan to compete in the following pitches this summer.
That is all.
As a VaaS company, we are currently running 8 proof-of-concept projects this fiscal year, and we hope this technology will soon be implemented in society close at hand (or out of sight) and prove useful to many of you.
We look forward to your continued guidance and encouragement.