RT AI TOOLKIT

Section 2/5: Customer Risk Issues

Your organisation may be engaging customers in the development and deployment of AI Systems when it uses AI Systems to enhance goods and services provided to customers or provides AI Products to customers. For the purposes of this section, “AI Products” refers to goods and services provided to customers that incorporate an AI System.

Does your organisation have an acceptable use policy in place for customers to ensure they do not use AI Products in an unlawful or unauthorised manner?

 
 
 
 
 

Does your organisation inform customers before using their personal data in your organisation’s AI Systems and/or AI Products?

 
 
 
 
 

Does your organisation obtain explicit consent from customers before using their personal data in your organisation’s AI Systems and/or AI Products?

 
 
 
 
 

Does your organisation have a written policy that explains to customers how their personal data is used in your organisation’s AI Systems and/or AI Products?

 
 
 
 
 

Does your organisation have clearly defined roles and responsibilities in the process of developing and supplying AI Products to customers?

 
 
 
 
 

Does your organisation have clearly defined roles and responsibilities that safeguard the deployment of AI into customer-facing products or services?

 
 
 
 
 

Does your organisation identify, monitor, and implement industry standards relevant to the deployment of AI into customer-facing products or services?

 
 
 
 
 

Do your organisation’s contracts with customers contain adequate and appropriate clauses for allocating liabilities arising from the use of AI Products?

 
 
 
 
 

Does your organisation define and publish communication channels for customers to use when the AI component of a product performs in a way that may cause harm or distress?

 
 
 
 
 

Does your organisation have a written policy that ensures the data on which the AI Product is built aligns with the purpose for which the customer will use the AI, and that the data is tested for bias?

 
 
 
 
 

Does your organisation inform its customers (a) about the existence of the AI Systems, and (b) before using their personal data in your organisation’s AI Systems (e.g., where the AI System is used for profiling)?

 
 
 
 
 

Do your organisation’s contracts with its customers stipulate the parties’ compliance with the Data Privacy Act and its implementing rules and regulations?

 
 
 
 
 

In the event of any data sharing between your organisation and your customers involving AI Systems, is such data sharing covered by a data sharing agreement or a similar document?

 
 
 
 
 

Does your organisation implement mechanisms that allow customers to question and contest automated decisions when such decisions pose significant risks to their rights and freedoms?

 
 
 
 
 

Does your organisation have appropriate mechanisms (e.g., conducting privacy impact assessments; integrating privacy-by-design and privacy-by-default; implementing common industry security standards; continuously monitoring AI Systems’ operation; creating a dedicated AI ethics board; regularly retraining and scrubbing AI Systems; and providing for human intervention) to ensure the responsible and ethical processing of personal data in the deployment of AI Systems?

 
 
 
 
 

Has your organisation explained to customers the risks associated with data processing, the expected output of the AI Systems, the impact of these systems on data subjects, and the dispute mechanisms available for questioning the data processing of AI Systems?

 
 
 
 
 

Does your organisation have policies and procedures on the exercise of data subject rights in connection with the processing of their personal data using AI Systems?

 
 
 
 
 
