ABBREVIATIONS
AI > Artificial Intelligence
ALTAI > The Assessment List for Trustworthy AI
CCTV > Closed-Circuit Television (video surveillance camera)
CJ > Court of Justice
DPA > Data Protection Authority
DPIA > Data Protection Impact Assessment
DPO > Data Protection Officer
EGTAI > Ethics Guidelines for Trustworthy AI
EU > European Union
FRIA > Fundamental Rights Impact Assessment
GDPR > General Data Protection Regulation
LED > Law Enforcement Directive
WDT > Weapon Detection Tool
THE CORE PURPOSE OF THE WDT TOOL
The Weapon Detection Tool (WDT) is an artificial intelligence (AI) system that protects private and public spaces from threats related to crime and terrorism. AiLert Inc. (Company) offers the WDT tool (also known as “SAMSON”), which, after being integrated with the Closed-Circuit Television (CCTV) cameras installed and operated by the clients (private or public entities, end users), enables autonomous algorithmic analysis of CCTV camera footage in search of possible weapons (for more on the Company, see Tables 2, 3, and 4, below). The WDT tool provides output in the form of an emergency alert (an AI system recommendation) that must be evaluated and validated by a human operator in the alarm center. The alarm center is a service provided by the clients. Should a client ask the Company to assist in procuring such a service, the Company will aid the client in contracting third-party solutions.
The positive impacts of the WDT tool are primarily measured in terms of the enhanced security layer such a system provides to the clients and society at large: protection of premises, protection of people, protection of property, early response enhancement (involvement of law enforcement bodies and other public entities responsible for public safety and security), aid to law enforcement bodies, situational awareness at the client’s premises, post hoc crime investigation, protection of public spaces, an anti-terrorism tool, and others. The Company strongly believes in public-private cooperation, both on a contractual and a voluntary basis, and strives to play a role in assisting any public (or publicly contracted private) entities in charge of public safety and security, thus not only upholding a common obligation to protect public safety and security but also actively engaging in protecting and enhancing fundamental EU values.
The WDT tool does not perform any mode of surveillance, nor does it contain any form of biometric identification capability. The WDT tool and the Company in general have no means to identify, track, or correspond with individuals, nor is the Company interested in offering or maintaining such services.
The WDT tool is trained (AI machine learning) to recognize objects that can be categorized as firearms (small magazine-fed handguns, rifles, assault rifles, revolvers, other small firearms that can be used as weapons, and similar) and is coded to send out alerts only when it assumes that such objects have been recognized in the compressed raw camera footage. Such alerts represent an AI output that requires human evaluation and validation. As noted earlier, the Company has no capacity to use the collected data for additional surveillance, biometric identification, or other similar data analysis, nor does it contract for such services with third-party vendors.
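As a minimal illustration of this recommendation-only design, the following Python sketch shows how an alert might be emitted solely for confident firearm detections and left unvalidated until a human operator decides. The class labels, confidence threshold, and interface are illustrative assumptions, not the Company’s actual implementation.

```python
from dataclasses import dataclass

# Illustrative firearm classes; the real class taxonomy is not public.
FIREARM_CLASSES = {"handgun", "rifle", "assault_rifle", "revolver"}
ALERT_THRESHOLD = 0.80  # assumed confidence threshold, for illustration only


@dataclass
class Detection:
    label: str         # predicted object class
    confidence: float  # model confidence in [0, 1]


@dataclass
class Alert:
    detection: Detection
    validated: bool = False  # set only by the human operator in the alarm center


def maybe_raise_alert(detection: Detection) -> Alert | None:
    """Emit an alert (recommendation) only for confident firearm detections.

    The alert is never acted on automatically: it remains a recommendation
    until a human operator evaluates and validates it.
    """
    if detection.label in FIREARM_CLASSES and detection.confidence >= ALERT_THRESHOLD:
        return Alert(detection=detection)
    return None  # non-firearm or low-confidence frames produce no output
```

The key design point is that the function can only produce a recommendation; nothing downstream acts on it without the operator’s decision.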
The Company is established in the United States (Delaware) but effectively provides the full service through its subsidiary (its legal, organizational, and operational center, where all relevant personnel and equipment are located) in Israel, a third country considered safe[1] regarding data transfers from the European Union (EU) to third countries (adequacy criteria). The Company not only continuously endeavors to be fully compliant with the relevant EU law (primarily the General Data Protection Regulation (GDPR) and the Law Enforcement Directive (LED)) but also makes its best efforts to ensure that its clients maintain the same track record. The Company promotes the use of Standard Contractual Clauses, provided the clients are interested in such a contractual arrangement.
The Company is fully devoted to democratic values and the protection of democratic processes and deliberations, and endeavors to align its operations with relevant technical standards (such as ISO and IEEE) and relevant ethical and legal recommendations (such as the Ethics Guidelines for Trustworthy AI (EGTAI) and the Assessment List for Trustworthy AI (ALTAI)). Equally, whenever possible, the Company strives to extend the reach of fundamental EU values to a wide range of its clients, many of whom are established outside the EU. In effect, the Company plays a role in the global outreach of the GDPR as a mechanism aimed at protecting personal data on a global scale. The Company has made great efforts to incorporate data protection and data privacy principles and concepts into the design of the WDT tool itself (Privacy by Design and Default) and strives to continuously educate its employees (and clients) on the meaning, relevance, and necessity of personal data and privacy protection.
The Company acts as a data processor and relies completely on the data provided by its clients (clients act as data controllers). The Company continuously creates its own training material and data sets utilized for data training. Exceptionally, the client’s data (compressed raw footage) is used only in cases of false positive detections (“false alert” scenario) to improve the WDT tool’s performance quality by retraining the algorithm (data retraining, data learning).
The WDT tool (as detailed in Table 1, below) is regularly monitored, updated, and retrained, ensuring constant care for its trustworthiness, effectiveness, and resilience in preventing any serious negative impact on individuals and society. The Company has implemented a record-keeping system for all relevant processes (automatic and human-originated), a risk management system, and a personal data protection system. The Company regularly evaluates and, when necessary, updates its Data Protection Impact Assessment (DPIA) and Fundamental Rights Impact Assessment (FRIA) contained within the WDT Privacy Policy. The Company maintains a close relationship with its clients that includes both internal and external communication channels (communication procedure), especially regarding the end users’ and other third parties’ (e.g., law enforcement bodies, private security companies) comments on the accuracy, usability, and effectiveness of the WDT tool.
[1] 2011/61/EU: Commission Decision of 31 January 2011 pursuant to Directive 95/46/EC of the European Parliament and of the Council on the adequate protection of personal data by the State of Israel with regard to automated processing of personal data (notified under document C(2011) 332), OJ L 27, 1.2.2011, p. 39–42.
KEY TAKEAWAYS ON THE WDT TOOL
The WDT tool is trained (AI machine learning) to recognize objects that can be categorized as firearms (small magazine-fed handguns, rifles, assault rifles, revolvers, other small firearms that can be used as weapons, and similar) and is coded to send out alerts only when it assumes that such objects have been recognized in the compressed raw camera footage. Such alerts are an AI output (a recommendation, not a decision) that requires human evaluation and validation (human-in-the-loop principle).
The positive impacts of the WDT tool are primarily measured in terms of the enhanced security layer such a system provides to the clients and society at large: protection of premises, protection of people, protection of property, early response enhancement (involvement of law enforcement bodies and other public entities responsible for public safety and security), aid to law enforcement bodies, situational awareness at the client’s premises, post hoc crime investigation, protection of public spaces, an anti-terrorism tool, and others. The Company strongly believes in public-private cooperation, both on a contractual and a voluntary basis, and strives to play a role in assisting any public (or publicly contracted private) entities in charge of public safety and security, thus not only upholding a common obligation to protect public safety and security but also actively engaging in protecting and enhancing fundamental EU values.
AiLert Inc. (Company) offers the WDT tool (also known as “SAMSON”), which, after being integrated with the Closed-Circuit Television (CCTV) cameras installed and operated by the clients (private or public entities, end users), enables autonomous algorithmic analysis of CCTV camera footage in search of possible weapons. The Company acts as a data processor and relies completely on the data provided by its clients (clients act as data controllers).
The WDT tool provides output in the form of an emergency alert (an AI system recommendation) that must be evaluated and validated by a human operator in the alarm center. The alarm center is a service provided by the clients. The Company continuously creates its own training material and data sets utilized for data training. Exceptionally, the client’s data (compressed raw footage) is used only in cases of false positive detections (the “false alert” scenario) to improve the WDT tool’s performance quality by retraining the algorithm (data retraining, data learning).
The Company not only continuously endeavors to be fully compliant with the relevant EU law (primarily the General Data Protection Regulation (GDPR) and the Law Enforcement Directive (LED)) but also makes its best efforts to ensure that its clients maintain the same track record. The Company promotes the use of Standard Contractual Clauses, provided the clients are interested in such a contractual arrangement. The Company is fully devoted to democratic values and the protection of democratic processes and deliberations, and endeavors to align its operations with relevant technical standards (such as ISO and IEEE) and relevant ethical and legal recommendations (such as the Ethics Guidelines for Trustworthy AI (EGTAI) and the Assessment List for Trustworthy AI (ALTAI)). Equally, whenever possible, the Company strives to extend the reach of fundamental EU values to a wide range of its clients, many of whom are established outside the EU. In effect, the Company plays a role in the global outreach of the GDPR as a mechanism aimed at protecting personal data on a global scale.
DATA STORAGE AND ACCESS
The storage of analyzed data classified as a positive detection is a matter to be decided and ultimately resolved by the client. Retaining, utilizing, storing, accessing, and deleting captured data and data utilized for retraining are subject to clear policy stipulations issued separately by the Company and the clients. The leading principle of data utilization is data minimization: only the relevant data is processed, and only to the extent necessary to fulfill the data processing purpose. If the client and the Company agree that the Company should aid the client in acquiring additional services for data transfer, storage, access, and deletion, the Company will aid the client in contracting third-party solutions.
Only designated individual employees on the Company’s side have access to compressed raw footage flagged as false positive detection footage. Such individuals are nominated, trained in personal data protection, and required to sign the necessary non-disclosure agreements (NDAs).
The Data Protection Management System will contain records of when data has been pseudonymized (implemented in the Record Keeping System), provided that the clients did not opt out of the use of pseudonymization, as discussed above. The system also contains detailed records of when WDT outputs have triggered an alert requiring a human operator to confirm a positive detection and de-pseudonymize the compressed raw footage.
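A minimal sketch of what one such record might look like, assuming a hypothetical append-only JSON-lines log (the actual Record Keeping System format is not specified here):

```python
import json
import time


def log_pseudonymization_event(log_path: str, footage_id: str, event: str,
                               operator_id: str | None = None) -> None:
    """Append one record to a hypothetical append-only JSON-lines log.

    `event` is either "pseudonymized" (applied automatically, unless the
    client opted out) or "de-pseudonymized" (triggered by a human operator
    confirming a positive detection).
    """
    record = {
        "timestamp": time.time(),
        "footage_id": footage_id,
        "event": event,
        "operator_id": operator_id,  # None for automatic pseudonymization
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
```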
FUNDAMENTAL RIGHTS IMPACT ASSESSMENT
The WDT tool does not negatively discriminate against people or groups. The algorithm training material contains a variety of artificially generated objects and people, as well as the Company’s self-produced CCTV footage of real people and objects. The Company constantly improves the training data set(s) with additional (AI-)generated materials to diversify the object and people recognition database. The recognition capacity is not oriented toward personal data (of any nature) but rather toward classifying general groups of recognizable categories (e.g., a person, a mobile phone, a hand weapon, etc.).
The Company continuously works on improving the algorithm training data set(s) (technical accuracy), monitors the WDT tool’s success rate (positive detections), and improves the algorithm when false detections are registered (algorithm data retraining). The system implements a traceability mechanism allowing self-assessment (and, when necessary, external independent audit; auditability) and the recording of segments of algorithm training. Each WDT tool version update, and the tool’s subsequent performance, can be tracked to specific improvements in the code. Considerable effort is continuously invested in enhancing the diversity of learning materials; a good example is the Company’s use of generative AI software that, in combination with camera footage recorded on the Company’s premises with the Company’s employees, allows for the creation of artificially generated persons of all genders, colors, and racial and social backgrounds, used as models to train the algorithm. Since the WDT tool primarily focuses on recognizing a certain class of objects (different kinds of weapons), in principle none of the above-specified characteristics plays a decisive role in specific object recognition. Therefore, bias arising out of discrimination is not expected.
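As an illustration of how version updates can be tracked to specific code and data changes, the sketch below models one hypothetical registry entry per deployed model version; the field names and versioning scheme are assumptions, not the Company’s actual tooling:

```python
from dataclasses import dataclass


@dataclass
class ModelVersion:
    """One entry in a hypothetical traceability registry.

    Linking each deployed version to the exact code revision and training-
    data manifest makes later self-assessment and external independent
    audits (auditability) reproducible.
    """
    version: str                        # e.g. "wdt-2.4.1" (illustrative naming)
    code_revision: str                  # commit hash of the training code
    dataset_manifest_checksum: str      # checksum of the training-set manifest
    retrained_on_false_positives: bool  # True for "false alert" retraining runs
    notes: str = ""


REGISTRY: list[ModelVersion] = []


def register_version(entry: ModelVersion) -> None:
    # In practice this would be persisted durably, not kept in memory.
    REGISTRY.append(entry)
```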
DATA PROTECTION IMPACT ASSESSMENT (DPIA)
The only instance where the Company, specifically the members of the “AI team”[1], has access to compressed raw footage containing personal data is the so-called “false alert” scenario. The false alert scenario refers to a situation whereby a human security operator deployed by a client reviews an automatic emergency alert issued by the WDT tool and flags the alert as a false positive detection. A false positive detection refers to a situation where the WDT tool produced an alert for a non-threatening situation. Such a scenario is governed by three guiding principles: the “human-in-the-loop” principle, the “data minimization” principle, and the “necessity and proportionality” principle.
The human-in-the-loop approach to AI system deployment means that the WDT tool is only empowered to produce automatic recommendations (AI output). Each WDT tool output (alert) must be evaluated and validated by a human agent (human security operator in the alarm center deployed by a client).
When a human operator reviews the alert footage where the WDT tool has suggested a weapon is visible and finds that suggestion incorrect (e.g., a person is holding a mobile phone, not a handgun) or expected (e.g., an armed security agent who is contractually obligated by the client to carry weapons for physical protection), the human operator will flag that particular AI recommendation as “not an emergency” (a false positive, non-threatening detection). The WDT tool is therefore based on the prevailing ethical principles of “human agency” and “human oversight” and does not allow any autonomous AI decision-making that could influence any person in any way. Any output derived from autonomous AI data analysis that can potentially make a societal impact (on an individual or society at large) must be validated and cleared by a human operator.
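The validation flow described above can be sketched as a simple state transition in which an alert never leaves its pending state without an explicit human decision; the states and queue below are illustrative, not the Company’s actual software:

```python
from enum import Enum


class AlertStatus(Enum):
    PENDING = "pending"                # raised by the WDT tool, awaiting review
    POSITIVE = "positive"              # operator confirmed a genuine threat
    FALSE_POSITIVE = "false_positive"  # flagged "not an emergency"


RETRAINING_QUEUE: list[str] = []  # footage ids routed back for algorithm retraining


def review_alert(footage_id: str, operator_confirms: bool) -> AlertStatus:
    """An alert leaves PENDING only through an explicit human decision;
    false positives feed the retraining queue (the "false alert" scenario)."""
    if operator_confirms:
        return AlertStatus.POSITIVE
    RETRAINING_QUEUE.append(footage_id)
    return AlertStatus.FALSE_POSITIVE
```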
In addition, a human operator can access system records and conduct real-time and post hoc oversight of the AI system and its performance, including data analysis and alert recommendations. This is particularly important, considering that a positive detection can significantly impact the client’s security and the security of other persons on the premises. Given the nature of the WDT tool, positive detections will have additional consequences for the operation of law enforcement agencies, private security companies, and other relevant public entities. Given the potential impact (prevention of crime, prevention of threat realization in a public space, protection of people, protection of property, deterrence of potential wrongdoers, and others), the WDT tool is not empowered to make any autonomous decisions. It is necessarily subject to the human-in-command principle, whereby each and every recommendation must be approved or denied by a human operator.
The Company does not provide the security alarm operator center service, partially due to its remote mode of operation but predominantly due to the nature of the threat and the necessity that any decision on that threat, especially regarding the response to it, be made locally and be subject to local (domestic) rules and regulations. The Company is ready to assist its clients in training human security operators to understand the recommendations given by the WDT tool (AI explainability). The Company is constantly improving its internal object recognition database, allowing human security operators to better understand the context of recommendations and outputs. Each time the algorithm is retrained, the Company updates its clients on the relevant changes in the WDT tool’s performance (communication with end users).
The database in question, as noted previously, is primarily based on materials and videos prepared entirely by the Company itself. In a false alert scenario, the relevant section of compressed raw footage (5 to 7 seconds long on average) is made available to the Company for algorithm retraining (human-on-the-loop principle). Because such videos contain instances where the algorithm erred in object recognition, in principle the Company will need to assess all relevant characteristics of the footage in order to properly understand why the algorithm incorrectly flagged an object as a weapon. This means that a Company employee, more precisely a member of the AI team, has access to the relevant segment of compressed raw footage that is not pseudonymized. Although no personal data is relevant or needed for the algorithm (re)training, the object falsely classified as a weapon may be positioned in a frame containing certain personal data. For example, a person is holding a hand in front of their face, and there is a certain object in the hand. To inspect and correctly classify the object in hand and to train the algorithm to better recognize such an object in the future, the data officer may see the face or other identifiable data related to the person holding the object.
Since the analyzed data potentially contains personal data, data (re)training is based on the data minimization principle. When possible, the frame focused on the object in hand, relevant for data training, will not contain personal data. When this is impossible, the data officer will retain access to the relevant non-pseudonymized compressed raw footage only for the time necessary to retrain on that segment. As noted earlier, the Company deletes the compressed raw footage previously made available for retraining purposes within 90 days of the alert. In practice, deletion from the cloud (if supplied by the Company) and from the Company’s internal servers is completed within days (for cloud storage, to free up occupied storage) and weeks (for internal servers, due to the need to revisit the training materials for potential retraining corrections) following the start of retraining. The record keeping system contains logs on data deletion. It should be noted that, due to a general need to re-utilize the training data, the false positive detection videos are, as a matter of practice, downloaded from the cloud to the internal servers located at the Company’s premises.
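A minimal sketch of the 90-day retention rule with deletion logging, assuming a hypothetical directory layout and log format:

```python
import os
import time

RETENTION_SECONDS = 90 * 24 * 3600  # 90 days from the time of the alert


def sweep_expired_footage(storage_dir: str, alert_times: dict[str, float],
                          deletion_log: list[dict]) -> None:
    """Delete retraining footage older than 90 days and log each deletion.

    `alert_times` maps a footage filename to the timestamp of the alert it
    stems from; the directory layout and log format are assumptions.
    """
    now = time.time()
    for name, alerted_at in alert_times.items():
        path = os.path.join(storage_dir, name)
        if now - alerted_at >= RETENTION_SECONDS and os.path.exists(path):
            os.remove(path)
            deletion_log.append({"footage": name, "deleted_at": now})
```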
During data retraining, no person other than the AI team, within or outside the Company, can access the compressed raw footage. The hardware and software utilized for data training are held in a secure physical environment at the Company’s premises, with stringent security and cybersecurity mechanisms implemented in their core operation (physical and cyber security by design and default). The risk management system (relevant for physical and cyber security) ensures that data transfer, utilization, storage, removal, and record keeping are performed per the relevant technical, legal, and ethical standards.
All access to the compressed raw footage, including all individual access points, is recorded (continuous logging and record-keeping system), and records are kept for at least six months. The data officers are continuously trained in data privacy issues and have signed NDAs concerning data training that involves personal data. Potential access to personal data is therefore governed by the principles of necessity and proportionality. Per these principles, the data officer will access personal data only where necessary to better understand the false positive scenario and retrain the algorithm to avoid repeating the same false categorization of objects. Additionally, the members of the AI team will access personal data only to the extent that the processing is proportional to the purpose of the processing. The proportionality principle covers situations where personal data is incidentally present in a frame containing relevant information on the class of an object falsely categorized as a weapon. In such a case, the personal data is irrelevant for retraining, and all footage containing personal data is deleted after the retraining. Considering that any training data received from the client or derived from alert recordings is permanently deleted after it has been used for retraining, the right to be forgotten and the right to deletion are implemented in the system as a design mechanism (privacy by design).
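The access-logging and six-month retention obligations could be modeled as follows; the field names, including the necessity justification, are illustrative assumptions:

```python
import time

ACCESS_LOG: list[dict] = []
LOG_RETENTION_SECONDS = 6 * 30 * 24 * 3600  # "at least six months" floor


def record_access(member_id: str, footage_id: str, justification: str) -> None:
    """Log every individual access to compressed raw footage.

    `justification` documents why the access was necessary (necessity and
    proportionality principles); all field names are illustrative.
    """
    ACCESS_LOG.append({
        "at": time.time(),
        "member": member_id,  # must be a member of the AI team
        "footage": footage_id,
        "justification": justification,
    })


def prune_access_log(now: float) -> None:
    """Drop entries only once the minimum six-month retention has passed."""
    ACCESS_LOG[:] = [e for e in ACCESS_LOG if now - e["at"] < LOG_RETENTION_SECONDS]
```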
It should again be highlighted that this is the only segment where the Company has direct access to the compressed raw footage. Such access is incidental and only to the extent necessary to improve the WDT tool’s algorithm and effectiveness. In all other cases, the Company and its employees have no access to personal data in the footage received from the clients, unless a client specifically asks the Company to access the edge device (e.g., the client lacks the specific knowledge to properly access the device) and retrieve recordings and data stored there, subject to the client’s stipulations. In no case does the Company or its employees have access to live footage from the CCTV surveillance cameras.
Therefore, from the Company’s perspective, any potential impact on privacy is minimized to a very low or negligible level. Even in the undesirable event of a cyber-attack on the Company’s premises, the risk of exposing personal data to unauthorized third-party access remains low, keeping in mind that personal data is rarely present during data processing on the Company’s premises (algorithm retraining).
Regarding the safety of personal data (safe storage), in principle the Company’s hardware does not store personal data, except for data retraining in the false alert scenario, and only when it is necessary to download the raw footage from the secure cloud server. In such a case, compressed raw footage that may contain personal data is stored on the Company’s hardware only to the extent that the footage is necessary for data training. Once data training is completed, all compressed raw footage received from the client is deleted within 90 days of the time the alert was produced. Considering that data training is performed at the Company’s offices rather than on clients’ premises, the Company utilizes secure data transfer services, such as Microsoft Azure cloud services, to access the false alert scenario compressed raw footage.
Regarding the automatic data processing done by the WDT tool, in principle the Company directs the client to obtain a specialized edge device (hardware unit) on which the Company installs the WDT tool. As the edge device is positioned on the client’s premises, the Company primarily relies on the clients’ security, safety, and network connectivity features. The Company utilizes various security features to access the device remotely and securely (e.g., firewall, VPN, remote SSH access, security access measures) but, in principle, relies on the clients’ internet and local network connections. The clients are contractually obligated to provide all necessary protection regarding the safety and security of the relevant equipment and access to it; the clients are primarily responsible for ensuring a safe and secure environment. In principle, the Company requires each client to issue a written statement confirming compliance with the GDPR or other relevant legislation (depending on the jurisdiction). The Company actively engages its clients whenever it identifies GDPR-related goals that can be better met on the clients’ premises. The Company will not engage with clients who are not ready to commit to the same values and obligations regarding data privacy (per the GDPR or other relevant legislation).
The second principal use of the CCTV data is to train the algorithm. Data training is relevant at the pre-deployment and early deployment stages at a new location (initial data utilized for data training) and for general algorithm improvement (data containing false positives, and general improvements and updates to the algorithm utilizing self-produced videos and materials). For this purpose, and with the above-noted exception of false alert scenario compressed raw footage, it is sufficient to use anonymized data that can no longer lead to identifiable information, the Company’s own data sets (with internal consent given by all persons participating in the creation of such materials), and a data set containing carefully recorded footage at the client’s premises where all persons appearing in the footage have given signed consent.
In the case of effective anonymization (where no de-anonymization is possible), following Recital 26 GDPR, the GDPR is no longer relevant, as no personal data is being processed. The algorithm data training in question is not general data training aimed at enhancing the general capacity of the algorithm but specific algorithm data training necessary to enable the WDT tool to recognize and evaluate all relevant conditions present at the client’s premises where the CCTV system operates. With that goal in mind, the Company is actively developing its generative AI (synthetic data generation) capabilities for use in false alert scenario algorithm retraining. The general intention behind utilizing generative AI technology is to combine the frames containing the images of objects falsely flagged as weapons, as available in the false alert scenario compressed raw footage, with a synthetically generated background and 3D models of people and objects in an AI-generated environment. This would, in effect, anonymize the training material by replacing all segments of compressed raw footage that potentially contain personal data. As noted earlier, such data processing will no longer be subject to GDPR compliance, as it will be fully anonymized.
[1] An updated list of the members of the AI team is provided in Annex 1. For inquiries, please contact us at support@ai-lert.com.
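To illustrate the synthetic-data compositing described above, the following sketch pastes a crop containing only the falsely flagged object onto an AI-generated background, so the resulting training frame carries no personal data. Array shapes, coordinates, and the compositing method are illustrative assumptions:

```python
import numpy as np


def composite_object_crop(object_crop: np.ndarray,
                          synthetic_background: np.ndarray,
                          top: int, left: int) -> np.ndarray:
    """Paste a crop of the falsely flagged object onto a synthetic scene.

    `object_crop` is the frame region containing only the misclassified
    object (H x W x 3); `synthetic_background` is AI-generated and contains
    no real persons, so the composite carries no personal data.
    """
    frame = synthetic_background.copy()
    h, w = object_crop.shape[:2]
    frame[top:top + h, left:left + w] = object_crop
    return frame


# Example: a 64x64 object crop placed into a 480x640 generated background.
crop = np.zeros((64, 64, 3), dtype=np.uint8)
background = np.full((480, 640, 3), 127, dtype=np.uint8)
training_frame = composite_object_crop(crop, background, top=200, left=300)
```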
PRIVACY BY DESIGN AND DEFAULT
As explained earlier, the WDT tool aims to feature a default pseudonymization technique in its design, representing the system’s major privacy-by-design characteristic, introduced as a privacy-by-default mechanism. Each time the WDT tool issues an alert, the security operator needs to evaluate the alert. Once the mechanism is fully implemented in the WDT tool’s operation (and not opted out of by the client), the security operator will have the opportunity to de-pseudonymize the footage when necessary to properly evaluate and validate the alert prior to flagging, or the footage will be automatically de-pseudonymized upon a positive detection flag. As detailed earlier, the first few seconds of the footage presented to the security operator are pseudonymized, and the security operator cannot see any personal data (a person’s body is removed; only the hand holding an object is framed by a bounding box). After the first two seconds of the footage, the de-pseudonymization key becomes available, and the relevant section of compressed raw footage can be accessed by the security operator by choice (where it is necessary to see the de-pseudonymized footage in order to evaluate the alert) or upon positive detection nomination (where it was not necessary to de-pseudonymize the video in order to validate the alert). Subsequent to a positive detection, any other person nominated by the client (e.g., a security company, law enforcement, etc.) will get access to the de-pseudonymized footage available on the cloud.
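The delayed key release described above can be sketched as follows; the two-second delay is taken from the text, while the field names and exact release conditions are illustrative assumptions:

```python
from dataclasses import dataclass

KEY_DELAY_SECONDS = 2.0  # key withheld for the first two seconds of review


@dataclass
class PseudonymizedClip:
    footage_id: str
    seconds_elapsed: float = 0.0      # review time elapsed for this clip
    positive_detection: bool = False  # operator flagged a genuine threat
    operator_requested: bool = False  # operator chose to view raw footage

    def key_available(self) -> bool:
        """The de-pseudonymization key is withheld at first, then released
        on operator request or automatically on a positive detection flag."""
        if self.seconds_elapsed < KEY_DELAY_SECONDS:
            return False
        return self.operator_requested or self.positive_detection
```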
It should be noted that, by design, only the client and persons nominated by the client have access to such data. The data on the cloud server is secured and encrypted utilizing the security measures provided by the cloud storage service provider (such as Microsoft Azure). In cases where the Company is contractually responsible for acquiring the cloud storage service, the Company will endeavor to contract a service provider offering a sufficient level of cybersecurity in line with relevant EU or other law (depending on the jurisdiction).
OBTAIN THE FULL VERSION OF OUR PRIVACY POLICY
If you are a DPA, reseller, client, or end user of our WDT tool and need to obtain a full copy of our weapon detection privacy policy, you can contact AiLert Inc. at support@ai-lert.com or call us at +1 857 444 96 64.