As the conversation around artificial intelligence (AI) continues to evolve, recent findings from OpenMedia highlight a rising tide of concern among Canadians regarding the unregulated adoption of AI technologies. OpenMedia’s community survey reveals a populace increasingly worried about the implications of AI, particularly around issues of misinformation, fraud, and surveillance—amplifying calls for proactive, people-centric regulations that can effectively balance AI’s potential benefits against its significant risks.
The Context of AI in Canada
Generative AI tools like ChatGPT have become commonplace, influencing various sectors such as education, healthcare, and business. However, while these technologies offer undeniable benefits in efficiency and ease of access to information, they simultaneously pose serious ethical questions. Who is accountable for the outputs generated by these systems? How will they reshape standards for privacy and data security? As AI adoption gains momentum, it’s crucial that regulatory frameworks evolve accordingly.
Internationally, nations like the United States and China are adopting "innovation-first" stances, prioritizing rapid development over comprehensive regulatory approaches. This places Canada, with its comparatively smaller AI development landscape, in a precarious position. As a result, Canadians must advocate for their interests amidst decisions made in regions like Silicon Valley and Beijing.
Despite this urgent need for governance, Canada's own legislative efforts have stalled. The proposed Artificial Intelligence and Data Act (AIDA), part of Bill C-27, died when Parliament was prorogued. Even with the appointment of an AI minister in June 2025, there has been little clarity on how Canada intends to navigate the complexities AI introduces.
Survey Insights
In August 2025, OpenMedia conducted a comprehensive survey with over 3,000 Canadian respondents to assess public sentiment regarding AI. The findings painted a clear picture of public apprehension:
- Usage Trends: Nearly half of respondents (49%) do not use AI regularly, and over 70% report using it only occasionally. Those who do use AI engage with it primarily for work (23%) and education (15%).
- Risk Awareness: Nearly 60% of respondents were more concerned about AI's risks than its benefits; only 5% felt more excited than worried about AI's potential.
- Concerns About Misinformation: An overwhelming 89% of respondents voiced fears about AI's role in generating misinformation and deepfake content, followed by concerns about criminal uses (76%) and expanded surveillance (71%).
- Push for Regulation: A strong majority (64%) expect regulations similar to the EU's to be implemented in Canada, and advocate for proactive measures that address potential harms before new technologies reach the market.
- Creative Works and Copyright: Most respondents (77%) believe AI systems should not be trained on copyrighted materials unless proper consent and compensation are provided.
- Prioritizing Safety: Fully 97% of participants believe that developing AI for fraudulent purposes should be a criminal offense, revealing a clear demand for stringent regulatory frameworks across sectors, particularly government, media, and healthcare.
The Community’s Call for Regulation
The survey results underscore a pressing demand for a regulatory framework that encompasses ethical, legal, and societal dimensions. Among the survey commentary, a recurring theme emerged emphasizing the role of transparency and accountability in AI development:
- Creative Control: Artists and creators deserve authority over whether their works are used in AI training, with compensation for any commercial exploitation of their intellectual property.
- Informed Consent: Canadians want policies that allow individuals to opt out of having their personal data used for AI applications and that ensure sensitive data is handled securely.
- Tailored Approaches: Respondents rejected one-size-fits-all policies, advocating for nuanced regulations that distinguish between various AI types and their respective applications, acknowledging both the potential benefits and risks associated with each.
Comments from respondents demonstrated a widespread concern about corporate monopolies exploiting AI technologies at the expense of individual rights and privacy. Many highlighted the dangers of unchecked AI influence in critical areas like education and public discourse, calling for a balanced approach that prioritizes human values, such as fairness and democracy.
The Path Forward
Canada currently stands at a crossroads regarding the future of AI governance. With a newly formed task force under AI Minister Evan Solomon getting underway, there is an opportunity for greater public engagement in shaping policy. Yet given the swift pace of AI development and its pervasive impact on everyday life, urgent action is required.
Canadians are increasingly voicing their apprehensions about the unchecked proliferation of AI technologies and the associated risks. The insights gained from the OpenMedia survey provide a clear mandate for policymakers: the voices of everyday Canadians should inform AI regulation, ensuring these laws prioritize public rights, privacy, and sustainability above corporate interests.
As discussions about AI governance move forward in Ottawa, stakeholders must listen to public sentiment. To craft effective, meaningful regulations, Canada should look to models that emphasize transparency, equity, and accountability, drawing lessons from more proactive international frameworks.
Conclusion
OpenMedia’s survey findings offer a snapshot of a nation seeking clarity and security around AI technologies. While the government’s agenda focuses on innovation and economic growth, Canadians’ voices point to a glaring need to recalibrate toward responsible AI adoption. The future trajectory of AI in Canada must be shaped by a careful blend of innovation and regulation that safeguards citizens’ rights and preserves the integrity of democracy; the decisions made today will resonate for decades to come.
As we proceed, the time for meaningful action is now. Canada has the chance to lead with a people-first approach to AI governance that others may well emulate. Establishing a framework that genuinely protects and promotes public interests would allow Canada to realize AI’s potential in a manner that aligns with Canadians’ rights and values.