securitylinkindia

Face Recognition Software Shows Improvement in Recognizing Masked Faces

A new study of face recognition technology created after the onset of the COVID-19 pandemic shows that some software developers have made demonstrable progress at recognizing masked faces. The findings, produced by the National Institute of Standards and Technology (NIST), are detailed in a new report called Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face Recognition Accuracy with Face Masks Using Post-COVID-19 Algorithms (NISTIR 8331). It is the agency’s first study measuring the performance of face recognition algorithms developed after the arrival of the pandemic. A previous report from July explored the effect of masked faces on algorithms submitted before March 2020, indicating that software available before the pandemic often had more trouble with masked faces.

“Some newer algorithms from developers performed significantly better than their predecessors. In some cases, error rates decreased by as much as a factor of 10 between their pre- and post-COVID algorithms,” said NIST’s Mei Ngan, one of the study’s authors. “In the best cases, software algorithms are making errors between 2.4% and 5% of the time on masked faces, comparable to where the technology was in 2017 on non-masked photos.”

The new study adds the performance of 65 newly submitted algorithms to those tested on masked faces in the previous round, offering cumulative results for 152 algorithms in total. Developers submitted algorithms to the FRVT voluntarily, but their submissions do not indicate whether an algorithm is designed to handle face masks, or whether it is used in commercial products. Using the same set of 6.2 million images as it had previously, the team again tested the algorithms’ ability to perform ‘one-to-one’ matching, in which a photo is compared with a different photo of the same person – a function commonly used to unlock a smartphone.
(The team did not test algorithms’ ability to perform ‘one-to-many’ matching – often used to find matches in a large database – but plans to do so in a later round.) As with the July report, the images had mask shapes digitally applied, rather than showing people wearing actual masks.

Some of the report’s findings include:

- When both the new image and the stored image are of masked faces, error rates run higher. With a couple of notable exceptions, when the face was occluded in both photos, false match rates ran 10 to 100 times higher than if the original saved image showed an uncovered face. Smartphones often use one-to-one matching for security, and it would be far more likely for a stranger to successfully unlock a phone if the saved image was of a masked person.
- The more of a face a mask covers, the higher the algorithm’s error rate tends to be. Continuing a trend from the July 2020 report, round mask shapes – which cover only the mouth and nose – generated fewer errors than wide ones that stretch across the cheeks, and those covering the nose generated more errors than those that did not.
- Mask colors affect the error rate. The new study explored the effects of two new mask colors – red and white – as well as the black and light blue masks the July study tested. While there were exceptions, the red and black masks tended to yield higher error rates than the other colors did. The research team did not investigate potential reasons for this effect.
- A few algorithms perform well with any combination of masked or unmasked faces. Some developers have created ‘mask-agnostic’ software that can handle images regardless of whether or not the faces are masked. The algorithms detect the difference automatically, without being told.

A final significant point that the NIST research team makes also carries over from previous studies – individual algorithms differ.
End users need to know how their chosen software performs in their own specific situations, ideally using real physical masks rather than the digital simulations the team used in the study. “It is incumbent upon the system owners to know their algorithm and their data,” Ngan said. “It will usually be informative to specifically measure the accuracy of the particular algorithm on operational image data collected with actual masks.”
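One-to-one verification of the kind NIST tested can be pictured as a threshold comparison between face embeddings; a false match occurs when images of two different people score above the threshold. The minimal sketch below, in plain Python, illustrates how a false match rate is tallied over impostor pairs. The embedding vectors, the cosine metric and the 0.8 threshold are illustrative assumptions; this is not NIST’s evaluation code.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify(probe, enrolled, threshold=0.8):
    """One-to-one match: accept only if similarity clears the threshold."""
    return cosine_similarity(probe, enrolled) >= threshold

def false_match_rate(impostor_pairs, threshold=0.8):
    """Fraction of impostor (different-person) pairs wrongly accepted."""
    accepted = sum(1 for p, e in impostor_pairs if verify(p, e, threshold))
    return accepted / len(impostor_pairs)
```

Raising the threshold lowers the false match rate at the cost of more false rejections; masks shift the similarity scores, which is why masked-versus-masked comparisons produced the higher error rates the report describes.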

Read More

Fujitsu Strengthens Cyber-Security with AI Technology to Protect Against Deception Attacks

Fujitsu Laboratories Ltd. recently announced the development of a technology to make AI models more robust against deception attacks. The technology protects against attempts to use forged attack data to trick AI models into making a deliberate misjudgment when AI is applied to sequential data consisting of multiple elements. With the use of AI technologies progressing in various fields in recent years, the risk of attacks that intentionally interfere with AI’s ability to make correct judgments represents a source of growing concern. Many suitable conventional security resistance enhancement technologies exist for media data like images and sound. Their application to sequential data such as communication logs and service usage history remains insufficient, however, because of the challenges posed by preparing simulated attack data and the loss of accuracy.

To overcome these challenges, Fujitsu has developed a robustness enhancement technology for AI models applicable to sequential data. This technology automatically generates a large amount of data simulating an attack and combines it with the original training data set to improve resistance to potential deception attacks while maintaining the accuracy of judgment. By applying this technology to an AI model developed by Fujitsu to judge the necessity of countermeasures against cyber-attacks, it was confirmed that about 88% of misjudgments could be prevented on Fujitsu’s own attack test data. Details of this technology were announced at the Computer Security Symposium 2020, held from October 26 (Monday) to October 29 (Thursday).

Background

In recent years, AI has been increasingly used to analyze a vast range of data in fields as varied as medicine, social infrastructure, and agriculture. Nevertheless, the existence of security threats peculiar to AI represents a growing concern.
Examples include attaching small stickers to road signs to confuse recognition systems, and intentionally trying to trick AI models with slightly altered attack data in order to prevent correct judgment. To help avoid these types of threats, an adversarial training technique has emerged in which simulated attack data created in advance is added to the training data so that the AI model is not fooled when it encounters malicious actors. Previous technologies remain insufficient for dealing with the challenges posed by sequential data, however. AI has a wide range of applications for this type of data, including the detection of cyber-attacks and credit card fraud, and so a growing need exists to develop technologies that can be applied to sequential data to strengthen resistance against deception attacks.

Issues

One way that cyber-attacks can be detected is through the analysis of communication log data. For instance, when an attacker logs in from one terminal to another, executes planted malware, and performs a series of attack operations to spread the infection, an AI model can detect the attack from the communication log of these operations. However, attackers disguise attacks by interspersing them among legitimate administrative operations, such as collecting server logs or applying patches, which can lead to false negatives in the AI detection model. In order to apply adversarial training techniques to such sequential data, it is necessary to automatically generate a large amount of data simulating a deception attack as training data. In the case of media data such as images, it is possible to generate simulated attack data easily, without damaging the characteristics of the original data, by perturbing the data in units of pixels in ways that cannot be discriminated by humans.
However, in the case of sequential data, it is not clear which elements determine the characteristics of the original data, so simply altering part of the data may destroy those characteristics (Figure 1). For example, the communication log data used to detect a cyber-attack is a series of log lines consisting of various elements such as the source of the communication, its destination, the account used, the executed command, and the command arguments. In addition, even if simulated attack data can be generated, care must be taken when using it to train the AI not to decrease the judgment accuracy on the original attack data.

Newly Developed Technology

Fujitsu has developed a technology that can automatically generate simulated attack data for training, which can be applied to AI models that analyze sequential data and enables training with less deterioration in the accuracy of attack detection. The features of the developed technology are as follows:

Automatic generation of simulated attack data

When creating simulated attack data, the original attack data is first prepared as a base, along with the data used for impersonation. In the case of cyber-attacks, the attacker wants to disguise malicious operations as benign ones, so the base data is the communication log data of the malicious operation, and the data used for the disguise is the communication log data of benign operations. Next, the benign communication log data used for the impersonation is analyzed by the pre-countermeasure AI model, and, referring to the result, the data with the strongest impersonation effect – that most easily judged to be a benign operation – is extracted. This extracted data is combined with the communication log data of the base malicious operation to generate simulated attack data.
Since the communication log data of the base malicious operation remains unchanged, a large amount of simulated attack data can be generated automatically without losing its original characteristics (Figure 2).

Ensemble adversarial training techniques

Using the original training data set and the simulated attack data set generated with the new technique described above, two kinds of AI models are constructed – an AI model that works accurately on the original training data and an AI model that works accurately on deception attack data (Figure 3). The decision results of the two AI models are then integrated by ensemble learning, using features indicative of possible deception attack data. When a cyber-attack is detected, it becomes possible to use ensemble learning to automatically and appropriately train AI models to decide which AI model’s decision should be…
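The two steps described above – splicing benign-looking log lines into a malicious sequence while leaving its order intact, then combining the verdicts of two specialist models – can be sketched roughly as follows. This is a simplified stand-in for Fujitsu’s method: the function names, the random insertion strategy and the simple or-combination of verdicts are illustrative assumptions, not the published algorithm.

```python
import random

def generate_simulated_attacks(malicious_seq, benign_lines,
                               n_samples=100, n_insert=3, seed=0):
    """Disguise a malicious log sequence by splicing in benign-looking lines.
    The base malicious lines are kept intact and in order, so the attack's
    original characteristics are preserved while the surface pattern changes."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        seq = list(malicious_seq)
        for _ in range(n_insert):
            pos = rng.randint(0, len(seq))  # any gap, inclusive of both ends
            seq.insert(pos, rng.choice(benign_lines))
        samples.append(seq)
    return samples

def ensemble_predict(model_original, model_adversarial, sequence):
    """Flag the sequence as an attack if either specialist model does.
    A stand-in for the learned integration described in the article."""
    return bool(model_original(sequence) or model_adversarial(sequence))
```

Training the second model on the output of `generate_simulated_attacks` is what gives it sensitivity to disguised sequences that the original model would miss.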

Read More

Honeywell Introduces Virtual Reality-Based Simulator to Optimize Training for Industrial Workers

Honeywell recently introduced an advanced industrial training solution that combines 3D immersive technology with industry-leading operator training simulation to create a collaborative learning environment for plant operators and field technicians. Honeywell’s Immersive Field Simulator is a virtual reality (VR) and mixed reality-based training tool that incorporates a digital twin of the physical plant to provide targeted, on-demand, skill-based training for workers. “Faced with increasingly complex technology and an experienced workforce nearing retirement, operators need robust technical training and development solutions that accurately depict real-world environments,” said Pramesh Maheshwari, Vice President and General Manager, Lifecycle Solutions and Services, Honeywell Process Solutions. “Traditional training approaches often fail to meet the mark when it comes to helping panel and field operators and maintenance technicians in process plants become better at their jobs. The result can be reliability issues and increased operational incidents.” The Immersive Field Simulator offers a smooth, virtual walk-through to familiarize workers with the plant. It includes avatars that represent virtual team members. The simulator’s cloud-hosted, device-agnostic platform, which incorporates flexible 3D models, grows with the user as plant operations change. The simulator is customizable to meet specific instructional needs, and project team members and plant subject matter experts can easily create customized training modules. Honeywell’s Immersive Field Simulator transforms training for today’s digital-native workforce, enabling employees to learn by doing while increasing knowledge retention, minimizing situations that can result in operational downtime, and improving competencies across a variety of areas.
“With our end-to-end solution, console and field operators can practice different operating and safety scenarios, including rare but critical situations, in a safe, simulated environment,” said Maheshwari. “This approach significantly improves upon current training tools and methods. VR-based training boosts confidence and retention while improving overall professional skills. Experience shows that students using VR can learn significantly faster than in the classroom.” Honeywell’s Competency Management program, which includes the simulator training, is built upon decades of workers’ experiences using integrated control and safety systems. Honeywell has incorporated this experience into state-of-the-art competency-based offerings that improve worker performance and safety.

Read More

Rising to Meet the INTERPOL Digital Security Challenge

Imagine that a well-known company has been hit by a cyberattack – criminals have conducted a business email compromise (BEC) scam against the company, compromising the email of the CEO to trick an employee into making a payment of USD 100 million to an account controlled by the criminals. Now imagine you are a police officer working at the INTERPOL National Central Bureau (NCB) in your country, and you are asked to work with cybercrime investigators as well as other digital forensics examiners around the world to investigate the incident. Although this is a fictional scenario, BEC fraud is a very real crime threat which police worldwide face on an increasingly regular basis.

Real-world investigation

This BEC scam was the premise of the fourth INTERPOL Digital Security Challenge – where teams of experts pool their knowledge and expertise in a race against the clock to investigate a simulated real-world cybercrime incident and gather evidence to identify the perpetrators. For the first time, the event was held virtually due to the COVID-19 pandemic. During the challenge, the 100 participating cybercrime and digital forensics experts from 50 countries had to analyse infected computers and the contents of the BEC email messages received by the fictional company to uncover evidence of the malware used and the email servers which had been compromised. After linking the malware to a command and control (C2) server, the teams identified clues that would help narrow down the whereabouts of the cybercriminals and take down the server. Adding an additional layer to the scenario, the criminals filmed the police takedown using drones and compromised the personal details of the officers involved. But one of the drones was captured, so the teams conducted digital forensic examinations to gather data from the device, which identified the criminals’ location. A computer seized at this location was also analysed for further information on the cybercriminals’ activities.
Craig Jones, INTERPOL’s Director of Cybercrime, underscored the importance of providing hands-on experience in using the latest techniques and technological tools for investigating cybercrime. “In the ever-changing world of cybercrime, theoretical knowledge is only one component of a successful investigation,” said Mr Jones. “Practical exercises like the Digital Security Challenge, which replicate the situations investigators will face in the real world, are great opportunities to gain the critical technical capabilities necessary to follow the digital trails left by cybercriminals,” concluded Mr Jones. Cybercrime investigations are becoming more and more complex, and operational exercises such as the Digital Security Challenge, which simulate some of the hurdles that investigators face every day, are vital for the development of investigative capacities.

Public-private partnership

The five-day (12-16 October) event was organized in close collaboration with private industry partners NEC Corporation and Cyber Defence Institute, both of which joined the Challenge for the first time. Throughout the simulated investigation, virtual training sessions were conducted to develop participants’ practical knowledge on relevant topics including malware analysis, drone forensics and BEC fraud. Isao Okada, General Manager, said, “We strongly believe this kind of event can help attendees gain the technical capabilities required to fight the latest cyber crimes.” First held in 2016, the Digital Security Challenge helps police worldwide develop the skills necessary to tackle the latest cybercrime threats. Previous editions simulated cyber blackmail involving Bitcoin, a ransomware attack, and the hacking of ‘Internet of Things,’ or IoT, devices.

Read More

Wisenet7 Cameras Acquire International Cybersecurity Certification ‘UL CAP’

Global security company Hanwha Techwin has recently acquired the UL CAP (Cybersecurity Assurance Program) certification, an international cybersecurity standard, for its newly launched network video surveillance cameras equipped with the Wisenet7 SoC (System on Chip). UL CAP is a certification program by UL, a leading global safety science company with over 127 years of history. The program assesses network-connectable products for potential cybersecurity issues, such as security vulnerabilities in system software, and their security level against security threats. Since the program evaluates not only products but also the relevant software development processes and risk management capabilities, only a handful of manufacturers in the industry have received the certification so far. Hanwha Techwin is the only security company in Korea to have acquired the certification.

Hanwha Techwin obtained the certification by meeting all of UL’s thorough evaluation criteria, including penetration testing, access control, encryption, and software updates. Although acquiring the UL CAP certification is generally known to take 8 to 10 months, it took Hanwha Techwin only around three months. The company had been constantly working on improving its capabilities by creating the in-house cybersecurity team ‘S-CERT’ to establish standardized software development processes even before applying for the program. Hanwha Techwin’s in-house-developed Wisenet7 cameras have embedded security solutions that keep video data secure through all stages of product design, manufacturing and actual use. As the popularity of network security cameras continues to rise, the importance of cybersecurity is also growing worldwide. Against this backdrop, Hanwha Techwin believes that acquiring the UL CAP certification will serve as an opportunity to reaffirm its position as a leader in the global market.
As the program evaluates the overall system relevant to cybersecurity, the certification will help with compliance with global security policies such as the EU’s General Data Protection Regulation (GDPR). In global markets such as the US, Europe and the Middle East, bidding for projects led by governments or organizations mostly requires the UL CAP certification. In the private sector, many customers also check for the certification when building video surveillance systems for sensitive facilities such as laboratories and banks. “By winning the UL CAP certification, we can now more actively promote our cybersecurity features as one of Hanwha Techwin’s strengths,” said a source at Hanwha Techwin. “We will provide the best cybersecurity features in our future products that customers can trust.”

Read More

Vanderbilt Adds QR500 Reader to In-Depth Access Control Portfolio

Vanderbilt recently announced the addition of ZKTeco’s QR500 readers to its access control portfolio. The QR500 reader is a new generation of intelligent access control reader. It has fast scanning speeds, high recognition rates, high compatibility, and, importantly, it can be connected to ACT365 Access Control Units. “Access control QR readers offer a convenient and cost-effective method of maintaining order and flow,” began Paul McCarthy, Product Manager at Vanderbilt. “Moreover, they have proven to be easy to use, not only for system users but also for end-users. Now Vanderbilt is adding another layer to their offering by bringing the ACT365 compatible QR500 reader into our already enhanced access control portfolio.”

Simple & effective

QR codes are indeed known for their simplicity and effectiveness. Here’s how they work. The QR500 reader has a recognition distance of more than 50mm for QR codes, which can be generated for visitors in ACT365 and sent to them via email. So, when a user is sent a QR code from ACT365 to their smartphone, all they have to do is scan the QR code at a QR500 reader and they are granted access – it’s as simple as that. McCarthy explained the working mechanics of the solution in more detail: “After scanning the QR code, the reader sends the unique data captured in the code to the service provider. If the data in the code links up with the QR code reader’s data, it grants access to the individual who scanned the code.” QR codes are generated directly from the ACT365 cardholder page with validity periods. This allows for a more secure and controlled environment, as only a system operator can create the temporary QR code in ACT365 and grant access to specified individuals.
A Real Value-Add

“Overall, I think it is fair to say that during this pandemic, access control QR readers have proven their effectiveness and have become a real value-add,” continued McCarthy. “As the user is using a mobile device, the system is on hand and avoids physical contact with any devices. And of course, the issuing of QR codes is electronic, thus avoiding one-to-one interaction with security operators.” QR codes can also apply to staff, as they allow employees to enter and exit the building to begin their work shifts in a hands-free way. The QR500 reader is a Wiegand-enabled device, simple to install, and is perfect for visitor management in multiple environments. These include hotels, B&Bs, sporting facilities, VIP shopping appointments, health clinic appointments, and delivery services.
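The flow McCarthy describes – a server-generated, time-limited code that the reader checks against the service provider before granting access – can be sketched with a signed token. The snippet below is a generic illustration in Python, not ACT365’s actual scheme; the shared secret, field layout and HMAC-SHA256 choice are all assumptions.

```python
import base64
import hashlib
import hmac

SECRET = b"hypothetical-shared-secret"  # held by the access control service

def issue_qr_payload(visitor_id, valid_until, secret=SECRET):
    """Create a signed, time-limited token to be encoded as a QR image."""
    msg = f"{visitor_id}|{valid_until}".encode()
    sig = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(msg + b"|" + sig.encode()).decode()

def validate_qr_payload(token, now, secret=SECRET):
    """Check the signature and validity window before granting access."""
    try:
        decoded = base64.urlsafe_b64decode(token.encode()).decode()
        visitor_id, valid_until, sig = decoded.rsplit("|", 2)
    except Exception:
        return False  # malformed or tampered token
    expected = hmac.new(secret, f"{visitor_id}|{valid_until}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    return now <= int(valid_until)
```

Because the expiry time is inside the signed payload, the operator-set validity period cannot be extended by the visitor, which is the property that makes temporary QR credentials safe to send by email.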

Read More

INTERPOL Report Highlights Impact of COVID-19 on Child Sexual Abuse

Under-reporting of child sexual abuse and increased sharing of child exploitation material through peer-to-peer networks are among the effects of the COVID-19 pandemic, according to an INTERPOL assessment. The report highlights the trends and threats in the current context compared to the pre-pandemic period, what impact these are having in the short term, and what changes are likely to happen as COVID-19 restrictions are lifted. “What the report shows is that we are seeing just the tip of a growing iceberg in terms of online child exploitation material,” said INTERPOL Secretary General Jürgen Stock. “It is important to remember that each photo and video of child sexual abuse is evidence of a real crime involving real children. Each time an image is viewed those children are re-victimized and their very real suffering is prolonged even further.” “We must do more to make sure that the officers investigating these horrific crimes have the support they need, which is where cooperation through INTERPOL plays a vital role in fighting this transnational crime,” added the INTERPOL Chief.

Key environmental, social and economic changes due to COVID-19 which have impacted child sexual exploitation and abuse (CSEA) across the world include:

- Closure of schools and the subsequent move to virtual learning environments;
- Increased time children spend online for entertainment, social and educational purposes;
- Restriction of international travel and the repatriation of foreign nationals;
- Limited access to community support services, child care and educational personnel who often play a key role in detecting and reporting cases of child sexual exploitation.

With this increase in obstacles for victims to report offences or access support, there are concerns that some offending may never be reported, or only after a substantial delay.

Forums on the Darknet

Other findings include increased discussions on CSEA forums on the Darknet.
Sex offenders with the technical expertise to administer forums have had more time to create new ones, whilst users have benefited from additional time online to organize their CSEA collections. Live-streaming of child sexual exploitation for payment has seen an increase in recent years, with demand likely to increase further due to travel restrictions. The supply of live-streamed CSEA material is also likely to rise as victims may be locked down with facilitators, and economic hardship increases. In areas severely affected by COVID-19, situations where parents are hospitalized and children are placed under others’ care, or are uncared for, are also likely to increase the risk of abuse. An increase in self-generated material distributed on the clear net is also highlighted in the report.

Delayed reporting of abuse

The COVID-19 impact on policing includes:

- A reduction or delay in the reporting of CSEA offences as normal channels are affected;
- A reduction in the use of the INTERPOL International Child Sexual Exploitation database by member countries;
- A reduction in specialized human resources, as staff usually assigned to CSEA investigations are diverted to other activities in relation to the pandemic;
- Changes in processes and efficiency due to the technical constraints of working from home, which has impacted both law enforcement and electronic service providers reporting cases to law enforcement;
- Court closures leading to delays in processing cases.

Recommendations for law enforcement to address the additional threats presented by the COVID-19 pandemic include:

- Create prevention and awareness campaigns for victims and guardians relating to the risk of CSEA online, including through gaming, messaging and social media platforms.
- Conduct incident mapping exercises across schools to identify emerging issues relating to CSEA, for example ‘zoom bombing’ incidents.
- Ensure hotlines remain open and staffed, and consider additional ways for offences to be flagged, such as free texting services and integrated reporting channels for children through gaming, social media and messaging services.
- Regularly share information related to online CSEA with INTERPOL, which can support and coordinate investigations across different jurisdictions.

Read More

Dahua AI Technology Ranked #1 In The Onera Satellite Change Detection (OSCD) Evaluation

Recently, Dahua Technology’s AI-based Remote Sensing Image Analysis Technology obtained first place in the comprehensive precision ranking of the Onera Satellite Change Detection (OSCD) Evaluation released by the Geoscience and Remote Sensing Society (GRSS). This achievement fully demonstrates Dahua Technology’s continuous development and innovation capabilities in the field of remote sensing image change detection. OSCD (Onera Satellite Change Detection) is jointly issued and maintained by the Institute of Electrical and Electronics Engineers (IEEE) and the Image Analysis and Data Fusion Technical Committee (IADF TC). It is an internationally authoritative evaluation platform for remote sensing image change detection algorithms. The evaluation involves complex and variable global surface coverage data, which is extremely challenging and attracts scholars and well-known academic institutions from across the globe.

In view of the large size of remote sensing images and the imbalance among the types of changing areas, Dahua Technology proposed an image stretching and normalization preprocessing method based on multi-channel fusion, which effectively addresses issues such as pronounced surface differences between images. In terms of model structure, the innovative use of the Tversky loss function mitigates the problem of category imbalance. At the same time, Dahua Technology builds multi-modal models to greatly improve the precision and recall of its algorithm. The Dahua Remote Sensing Image Analysis Technology has set another evaluation record on the remote sensing image change detection data set, achieving first place in the overall ranking.
Remote sensing image change detection

Based on a change detection algorithm for remote sensing images, the Remote Sensing Image Change Detection Technology uses remote sensing images from different phases to obtain dynamic change information about the land cover type in a specified area, and assigns semantic category labels to image pixels that change over time. It is widely used in ecological resources monitoring, urban construction management and other fields.

In the field of ecological resources monitoring, the remote sensing image change detection algorithm can eliminate interference factors such as season and weather by comparing remote sensing images of the same area from before and after (two time phases) to obtain the spatio-temporal changes in the ecological geology of a wide area. It can be applied to acquire coverage information, including periodic monitoring of water bodies, vegetation, minerals etc., providing a scientific basis for scenarios such as resource development, environmental pollution monitoring, and natural disaster assessment.

In the field of urban construction management, the Remote Sensing Image Semantic Segmentation Technology can be used to automatically obtain the location, extent, type and other information about areas where the use of the land changes, achieving city-level intelligent inspection of illegal buildings. At the same time, the Remote Sensing Image Object Detection Technology can be used to effectively extract distribution information on urban infrastructure such as sports venues, dynamically monitor the construction of infrastructure facilities within the city, and provide effective data support for urban infrastructure auditing. In addition, the combination of high-altitude and ground monitoring data can achieve integrated ground, air and sky monitoring coverage without blind spots, providing comprehensive and high-precision spatial visualization for urban construction management.
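The Tversky loss mentioned above generalizes the Dice loss by weighting false positives and false negatives separately, which is why it helps when changed pixels are much rarer than unchanged ones. A minimal soft version over per-pixel change probabilities might look like the sketch below; the weight values are illustrative, not Dahua’s published settings.

```python
def tversky_loss(probs, labels, alpha=0.3, beta=0.7, eps=1e-7):
    """Soft Tversky loss over per-pixel change probabilities.

    probs:  predicted probabilities of change, one per pixel
    labels: ground-truth binary labels (1 = changed, 0 = unchanged)
    alpha weighs false positives, beta false negatives; choosing
    beta > alpha penalizes missed changes more, countering the class
    imbalance typical of change detection data sets.
    """
    tp = sum(p * y for p, y in zip(probs, labels))
    fp = sum(p * (1 - y) for p, y in zip(probs, labels))
    fn = sum((1 - p) * y for p, y in zip(probs, labels))
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)
```

With alpha = beta = 0.5 this reduces to the familiar Dice loss; skewing the weights toward false negatives is the standard remedy when the "changed" class is scarce.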

Read More

Hikvision Announces Full-year 2019 and First Quarter 2020 Financial Results

Hikvision, an IoT solutions provider with video as its core competency, has released its 2019 annual report. In 2019, Hikvision generated total revenue of RMB57.66 billion, achieving year-over-year (YoY) growth of 15.69%, and net profit attributable to shareholders of the company was RMB12.41 billion, reflecting YoY growth of 9.36%. Total overseas revenue amounted to RMB16.24 billion, with YoY growth of 14.43%, and revenue in the domestic market (China) in 2019 was RMB41.42 billion, with YoY growth of 16.20%. Hikvision’s innovative businesses achieved solid growth in 2019. In particular, the revenue of its smart home business in 2019 was RMB2.59 billion, a 58.38% YoY growth, and the robotics business generated revenue of RMB813.99 million, with YoY growth of 23.88%. In 2019, Hikvision’s R&D spending was RMB5.48 billion, which accounts for 9.51% of total revenue, and the company has more than 19,000 R&D employees. The significant R&D investments have consolidated the company’s advantages in both hardware and software products, as well as cutting-edge technologies such as artificial intelligence, multi-dimensional perception, cloud computing, and big data. Amidst the extraordinary global circumstances in Q1 2020, Hikvision also disclosed its Q1 2020 financial results, with revenue of RMB9.43 billion, representing a YoY decrease of 5.17%. Net profit attributable to shareholders of the company was RMB1.50 billion, representing a YoY decrease of 2.59%. Despite the emergence of new uncertainties in the external environment in 2020, Hikvision will proactively improve operating efficiency and manage risks to overcome the challenges and maintain steady development. Meanwhile, Hikvision will continuously provide advanced products and solutions through innovative technologies to help increase the safety, efficiency and sustainability of communities and societies.

Read More

Prama Hikvision Adds Innovative Features of Temperature Screening and Mask Wearing Alert to MinMoe Face Recognition Terminal

Prama Hikvision, an IoT solutions provider with video as its core competency, has added innovative features such as a temperature screening function and a mask-wearing alert to its ground-breaking MinMoe face recognition terminal product range. Hikvision’s face recognition terminals are embedded with deep-learning algorithms for access control with value-added features. Hikvision’s DS-K1T607TEF MinMoe face recognition terminal is an access control device integrated with a temperature screening function. It can quickly take skin-surface temperature readings and upload abnormal-temperature events to the center, and can be widely applied in multiple scenarios such as enterprises, stations, dwellings, factories, schools and campuses.

The DS-K1T607TEF terminal has special features related to temperature screening and the face mask wearing alert. It uses a vanadium oxide uncooled sensor to measure the target’s temperature, with a measuring range of 30°C to 45°C (86°F to 113°F), accuracy of ±0.5°C without black body calibration, and a recognition distance of 0.3 to 2 m. A fast temperature measurement mode detects faces and takes skin-surface temperatures without identity authentication. Multiple authentication modes are supported: card and temperature; face and temperature; card, face and temperature; etc. Face mask wearing alert: if the recognized face is not wearing a mask, the device will prompt a voice reminder; at the same time, the authentication or attendance remains valid. The terminal displays temperature measurement results on the authentication page and triggers a voice prompt when it detects a temperature above normal levels. Other features include a configurable door status (open/close) when an abnormal temperature is detected, and transmission of online and offline temperature information to the client software via TCP/IP communication, with the data saved in the client software.
- Face recognition duration: <0.2 s/user; face recognition accuracy rate: ≥99%
- 50,000 face capacity, 50,000 card capacity and 100,000 event capacity
- Suggested height for face recognition: between 1.4 m and 1.9 m
- Supports 6 attendance statuses, including check in, check out, break in, break out, overtime in, overtime out
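The interplay of the features above – a mask alert that does not block authentication, and a configurable door status on an abnormal temperature – amounts to simple decision logic. The sketch below is a hedged illustration only: the 37.3°C fever threshold, the prompt texts and the function shape are assumptions, not Hikvision’s firmware behaviour.

```python
def screening_decision(temp_c, mask_on, identity_ok,
                       fever_threshold=37.3, open_door_on_fever=False):
    """Illustrative terminal screening logic.

    A missing mask only triggers a voice reminder (authentication stays
    valid), while an abnormal temperature can be configured to keep the
    door closed via open_door_on_fever.
    Returns (door_open, voice_prompts).
    """
    prompts = []
    if not mask_on:
        prompts.append("Please wear a mask.")
    fever = temp_c > fever_threshold
    if fever:
        prompts.append("Abnormal temperature detected.")
    door_open = identity_ok and (open_door_on_fever or not fever)
    return door_open, prompts
```

For example, an authenticated user without a mask is still admitted but hears a reminder, while a configurable fever result can keep the door closed regardless of a valid face match.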

Read More