Introduction
Artificial Intelligence (AI) tools are being adopted rapidly across sectors in Hong Kong. According to compliance checks conducted by the Office of the Privacy Commissioner for Personal Data (PCPD) in May 2025, 80% of organizations surveyed (48 out of 60) reported integrating AI into their daily operations. This rapid integration raises significant concerns about data privacy and governance. In response, the PCPD has issued practical guidance to help organizations navigate the complexities of deploying AI technologies.
Purpose of the Guidance
Recognizing the distinct challenges posed by AI, the PCPD's new guidance encourages businesses to refer to existing resources such as the "Checklist on Guidelines for the Use of Generative AI by Employees." The guidance aims to help organizations create robust internal policies that address the risks inherent in AI usage.
Key Areas of Focus for AI Policies
The PCPD emphasizes several critical aspects that organizations must consider when formulating their internal AI policies. These include:
Scope of Permissible Use: Organizations should clearly identify which Generative AI (Gen AI) tools are approved for use and define the specific permitted use cases, such as content creation and document drafting. Clarity about the scope is paramount; organizations must determine whether the policy applies universally to all employees or is limited to specific departments.
Protection of Personal Data Privacy: Internal AI policies must provide clear guidelines on data input and output during the use of Gen AI tools. Organizations should delineate the types and volumes of data that can be inputted, outline acceptable use cases for AI-generated outputs, and establish guidelines for the storage and retention of such information to ensure compliance with data privacy standards.
Lawful and Ethical Use and Prevention of Bias: An ethical framework is necessary for AI usage. The internal policy should prohibit unlawful activities and require that all AI-generated outputs undergo human review to verify accuracy and mitigate potential bias. This includes guidelines for watermarking or labeling AI-generated content to indicate its origin.
Data Security: Organizations must specify which employees are authorized to access Gen AI tools, detailing the devices permitted for use. Strong security protocols, including robust user authentication and credentials, should be implemented. Employees must also be educated on the importance of reporting AI-related incidents as part of the organization’s incident response plan.
Violations of AI Policy: Clear consequences for non-compliance with the internal AI policy should be outlined. For broader AI governance, organizations can refer to the PCPD's "Artificial Intelligence: Model Personal Data Protection Framework," issued in 2024.
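The labeling requirement above could be implemented as a simple provenance tag appended after human review. The sketch below is purely illustrative; the label format, helper names, and reviewer field are assumptions, not a PCPD-prescribed scheme.

```python
# Illustrative only: a minimal provenance label recording human review.
# The label text and field names are assumptions, not a mandated format.
AI_LABEL = "[AI-generated content - reviewed by: {reviewer}]"

def label_ai_output(text: str, reviewer: str) -> str:
    """Append a provenance label recording that a named human reviewed the output."""
    return f"{text}\n\n{AI_LABEL.format(reviewer=reviewer)}"

def is_labeled(text: str) -> bool:
    """Check whether a document already carries the AI-generated label."""
    return "[AI-generated content" in text

draft = "Quarterly summary drafted with a Gen AI tool."
released = label_ai_output(draft, reviewer="J. Chan")
```

A workflow tool could then refuse to distribute any AI-assisted document for which `is_labeled` returns `False`.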
Supporting Responsible Use: Practical Measures
To ensure effective governance and responsible use of AI, the PCPD proposes several practical support measures, including:
- Regular communications with employees concerning updated policies and guidelines.
- Targeted training initiatives tailored to employee roles.
- Establishing a designated support team within the organization to aid implementation.
- Implementing a feedback mechanism designed to foster ongoing improvements in AI use and policy.
Actionable Takeaways for Organizations
To align with the expectations of the PCPD and mitigate potential AI-related risks, organizations should consider the following strategies:
Comprehensive Review: Conduct a thorough evaluation of all AI tools and use cases to identify any processing of personal data. Ensure every AI tool in use is approved, with its permitted use cases and designated user groups documented. New tools should be approved before adoption.
Strict Data Guidelines: Prohibit inputting sensitive or confidential information into public AI tools. Implement stringent evaluations to ensure compliance with data privacy laws when processing personal data in AI systems. Establish clear protocols for the storage, retention, and categorization of AI-generated outputs.
Designated Reviewers: Assign reviewers for high-risk use cases who can ensure thorough fact-checking and bias assessment before disseminating AI-generated content externally.
Robust Data Security: Implement strong authentication mechanisms, encryption protocols, and secure configuration standards while restricting AI tool usage to approved devices only.
Individualized Training: Provide tailored, role-specific training sessions, ensuring employees clearly understand the internal AI policies and guidelines.
Ongoing Audits: Conduct regular audits of AI tool usage, gather employee feedback for continuous refinement, and update internal policies as necessary to adapt to regulatory changes.
Regulatory Compliance: Consistently ensure that internal AI policies align with existing data privacy laws, including the Personal Data (Privacy) Ordinance (PDPO).
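As one illustration of the strict data guidelines above, an organization might screen text for common personal-data patterns before it is pasted into a public Gen AI tool. The two patterns below (Hong Kong Identity Card numbers and email addresses) are illustrative assumptions only; a real control would need a much broader, locally validated pattern set.

```python
import re

# Illustrative patterns only; not an exhaustive or PCPD-mandated list.
PATTERNS = {
    "HKID": re.compile(r"\b[A-Z]{1,2}\d{6}\([0-9A]\)"),   # e.g. A123456(7)
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_input(text: str) -> list[str]:
    """Return the names of the personal-data patterns found in the text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

def redact(text: str) -> str:
    """Replace matched personal data with placeholders before submission."""
    for name, pat in PATTERNS.items():
        text = pat.sub(f"[REDACTED {name}]", text)
    return text
```

A policy might require that any text for which `screen_input` returns matches is either redacted or blocked from submission to an unapproved tool.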
Conclusion
The PCPD’s recent guidelines underscore the pressing need for organizations in Hong Kong to adopt a structured, proactive approach to AI governance. By establishing comprehensive internal policies that address ethical, lawful, and responsible AI use, organizations can navigate the complexities of AI while minimizing the risk of data privacy violations.
As AI continues to transform various sectors, the emphasis from the PCPD serves as a timely reminder for businesses to integrate these guidelines within their operational frameworks. Proactive engagement in this regard will not only ensure compliance with regulatory standards but also contribute significantly to fostering a responsible AI ecosystem in Hong Kong.
Organizations should heed these insights and prioritize the establishment of robust governance frameworks as they harness the capabilities of AI technologies. The PCPD’s guidance is not merely a regulatory formality but a crucial step toward responsible innovation in the era of AI.