Introduction
Banks in Canada have a longstanding commitment to technological innovation and are increasingly supporting the development and adoption of AI through both in‑house initiatives and external partnerships. Significant investments in AI and related technologies are improving how banks serve customers, strengthening risk management, and increasing operational efficiency.
Overview
The CBA and its members support the responsible development and use of AI, particularly given its growing role in fraud detection, cybersecurity, operational efficiency, and customer service within the financial sector.
Banks currently manage the risks associated with AI and other technologies responsibly through long‑standing, sector‑specific regulatory requirements and internal frameworks, including model risk management, third‑party oversight, and technology and cyber risk controls.
As the government continues to promote innovation and support the adoption of AI across sectors, any future consideration of rules or guidance should be approached with care to avoid duplicating existing obligations or creating fragmented oversight. In the financial sector, where strong regulatory foundations are already in place, efforts should remain coordinated and clearly scoped to maintain alignment with existing frameworks while supporting continued innovation, with ongoing consultation with industry.
The following reflects the CBA’s comments in response to the relevant questions outlined in the federal government’s consultation on Canada’s AI Strategy.
Research and talent
Consultation Questions
- How does Canada retain and grow its AI research edge? What are the promising areas that Canada should lean in on, where it can lead the world?
- How can Canada strengthen coordination across academia, industry, government and defence to accelerate impactful AI research?
- What conditions are needed to ensure Canadian AI research remains globally competitive and ethically grounded?
- What efforts are needed to attract, develop and retain top AI talent across research, industry and the public sector?
The CBA’s Response
Canada needs a clear strategy for retaining and growing AI talent. The CBA supports a two‑stream approach: first, by creating faster pathways for qualified professionals through targeted immigration and talent‑mobility programs; second, by expanding domestic training through public‑private partnerships with universities, innovation hubs, and financial‑sector employers. Introducing an AI training credit per employee would also encourage firms to upskill their people on practical AI tools in areas such as large language models, agentic systems, and related applications.
To stay globally competitive and ethically grounded, Canada should adopt interoperable standards that align with internationally recognized frameworks such as the NIST AI Risk Management Framework. Common testing, validation, and data‑provenance standards would strengthen research collaboration and reduce duplication, while providing clear ethical guardrails.
We recommend designating or empowering an existing or new interdisciplinary body, in collaboration with the Canadian AI Safety Institute (CAISI), relevant federal agencies, and industry consortia, to convene public‑private‑academic partnerships, manage knowledge sharing, and create pipelines for top post‑secondary talent into finance‑oriented AI research. Within this framework, Canada can build on its ongoing work with the International Network of AI Safety Institutes by prioritizing research on aligning AI with human values, mitigating risks from synthetic content, and stress‑testing agentic AI systems. This combination of institutional stewardship and domain focus would strengthen Canada’s capacity to translate foundational AI safety research into practical, sector‑specific applications for the financial industry.
Accelerating AI adoption by industry and government
Consultation Questions
- Where is the greatest potential for impactful AI adoption in Canada? How can we ensure those sectors with the greatest opportunity can take advantage?
- What are the key barriers to AI adoption, and how can government and industry work together to accelerate responsible uptake?
- How will we know if Canada is meaningfully engaging with and adopting AI? What are the best measures of success?
The CBA’s Response
Progress on AI adoption is slowed by fragmented regulation and overlapping expectations across prudential, conduct, and privacy regulators. A coordinated, whole‑of‑government approach, grounded in proportionality and clear roles, would create a more predictable environment for responsible deployment.
Cross‑border inconsistencies, especially where U.S. jurisdictions impose stricter AI rules, also push Canadian institutions to default to the toughest global standard, dampening domestic innovation. A Canadian AI regulatory sandbox would provide a controlled space for supervised testing, similar to models in the UK, Singapore, and Australia. Alignment on third-party assurance and testing standards would further help smaller vendors participate confidently in the ecosystem.
The government could consider how to measure adoption success, focusing on metrics that capture meaningful integration rather than experimentation. Insights from the G7 Cyber Expert Group suggest that an adoption framework could include indicators such as uptake across critical infrastructure sectors, productivity gains, levels and sources of investment, the types of AI technologies deployed, and trends in AI‑related labour demand. In addition to these quantitative measures, success should also be assessed by the extent to which the objectives of each use case are achieved while ensuring that associated risks are effectively managed and mitigated. Demonstrating what successful and responsible AI adoption looks like will be essential to fostering public trust and encouraging wider deployment across the economy.
Commercialization of AI
Consultation Questions
- What needs to be put in place so Canada can grow globally competitive AI companies while retaining ownership, IP and economic sovereignty?
- What changes to the Canadian business enabling environment are needed to unlock AI commercialization?
- How can Canada better connect AI research with commercialization to meet strategic business needs?
The CBA’s Response
Growing globally competitive AI firms requires a trusted and consistent assurance environment. Clear, standardized evaluation requirements, covering explainability, testing, and model assurance, would help Canadian vendors meet financial‑sector expectations. Government can accelerate commercialization through public procurement, innovation partnerships, and ecosystem funding that pull solutions into production.
Equally important is investment in foundational infrastructure such as compute capacity and high‑quality training datasets, along with initiatives to support AI SMEs and startups. Canada should take a balanced approach to cloud and data sovereignty, such as continuing to leverage trusted global infrastructure while building domestic compute and standards capacity.
Canada should encourage AI commercialization by building on earlier investments in the Commercialization pillar of the Pan‑Canadian AI Strategy and supporting public‑private partnerships to facilitate the shift from AI R&D to commercialization.
Scaling Canadian champions and attracting investments
Consultation Questions
- How does Canada get to more and stronger AI industrial champions? What supports would make our champions own the podium?
- What changes to Canada’s landscape of business incentives would accelerate sustainable scaling of AI ventures?
- How can we best support AI companies to remain rooted in Canada while growing strength in global markets?
- What lessons can we learn from countries that are successful at investment attraction in AI and tech, both from domestic sources and from foreign capital?
The CBA’s Response
Scaling Canadian AI champions depends on a stable, predictable ecosystem. Regulatory language and expectations need to be consistent across agencies so that firms (and investors) know what “transparency,” “fairness,” or “explainability” actually mean in practice.
Governments can use procurement levers and targeted incentives to help smaller domestic AI vendors meet sector‑specific requirements and grow into mid‑sized, export‑ready firms. International regulatory sandboxes remain a good model for balancing oversight and innovation, providing pathways for companies to scale responsibly.
Comparable international initiatives illustrate how coherent investment‑attraction strategies can reinforce domestic scaling efforts. The European Union combines regulatory alignment with coordinated research and infrastructure funding through the European High‑Performance Computing Joint Undertaking (EuroHPC JU) and related Horizon Europe programs, which advance compute capacity and AI‑intensive research. France’s AI Campus initiative, supported by public and private partners, seeks to strengthen national AI research and commercialization capacity. The United Kingdom’s Modern Industrial Strategy integrates AI within a broader framework linking innovation funding, industrial policy, and skills development to attract investment and accelerate responsible technology adoption.
Building safe AI systems and strengthening public trust in AI
Consultation Questions
- How can Canada build public trust in AI technologies while addressing the risks they present? What are the most important things to do to build confidence?
- What frameworks, standards, regulations and norms are needed to ensure AI products in Canada are trustworthy and responsibly deployed?
- How can Canada proactively engage citizens and businesses to promote responsible AI use and trust in its governance? Who is best placed to lead which efforts that fuel trust?
The CBA’s Response
Building public confidence starts with balanced transparency: enough to explain how AI is used and governed, but not so much that it exposes sensitive models or third‑party intellectual property. Public education and awareness initiatives should accompany any regulatory rollout so Canadians understand both the benefits and the limits of AI.
Canada should invest in detection and mitigation of emerging threats, such as deepfakes and fraud‑enabling tools, while anchoring oversight in interoperable standards like the NIST AI RMF. This approach ensures concrete assurance processes without unnecessary complexity or duplication.
To support these objectives, existing regulatory and supervisory frameworks can be leveraged to set proportionate expectations for transparency, explainability, and interpretability across the AI supply chain. Financial institutions cannot meaningfully assess or manage AI‑related risks without a corresponding duty on third‑party vendors to disclose relevant information about data provenance, model governance, and downstream dependencies. Guidance within current financial sector frameworks should focus on aligning institutional accountability with vendor obligations, ensuring visibility into training data, model inputs, and other fourth‑ or fifth‑party sources where applicable. This alignment would strengthen trust in AI‑enabled financial systems while maintaining coherence between organizational oversight and third‑party assurance mechanisms.
Education and skills
Consultation Questions
- What skills are required for a modern, digital economy, and how can Canada best support their development and deployment in the workforce?
- How can we enhance AI literacy in Canada, including awareness of AI’s limitations and biases?
- What can Canada do to ensure equitable access to AI literacy across regions, demographics and socioeconomic groups?
The CBA’s Response
AI readiness depends on the workforce. Government should back public‑private training partnerships that reskill and upskill workers, complemented by an AI training credit to offset employer costs. Immigration and talent-mobility programs should be expanded to fill near‑term skill gaps until domestic pipelines mature.
To ensure equitable AI readiness, Canada should adopt a coordinated national approach to AI literacy that broadens participation across demographics and regions. Public‑private‑academic partnerships can play a central role by integrating AI‑focused training into non‑technical post‑secondary programs, including social sciences, business, and related disciplines. Targeted incentives could encourage financial institutions, and other industry sectors, to collaborate with academia in developing such programs and to expand cooperative placements and internships that translate conceptual literacy into applied experience. This approach aligns with international principles, such as UNESCO’s Global Call for Action on AI Literacy and the New Digital Divide, which underscores the importance of inclusive education, local engagement, and continuous learning as foundations for digital equity.
Building enabling infrastructure
Consultation Questions
- Which infrastructure gaps (compute, data, connectivity) are holding back AI innovation in Canada, and what is stopping Canadian firms from building sovereign infrastructure to address them?
- How can we ensure equitable access to AI infrastructure across regions, sectors and users (researchers, start‑ups, SMEs)?
- How much sovereign AI compute capacity will we need for our security and growth, and in what formats?
The CBA’s Response
AI development and deployment require robust compute and data infrastructure. Canada should take a pragmatic stance on cloud and data sovereignty, continuing to use trusted global cloud services while investing in Canadian capacity, open‑source models, and standards. Direct investment in compute power and curated datasets would reduce bottlenecks and help Canadian innovators compete globally.
Security of the Canadian infrastructure and capacity
Consultation Questions
- What are the emerging security risks associated with AI, and how can Canada proactively mitigate future threats?
- How can Canada strengthen cybersecurity and safeguard critical infrastructure, data and models in the age of AI?
- Where can AI better position Canada’s protection and defence? What will be required to have a strong AI defensive posture?
The CBA’s Response
The security focus should be twofold: first, managing concentration risks from dependence on a small number of global cloud providers; second, harnessing AI itself to strengthen cybersecurity and fraud detection. These priorities can be effectively addressed within existing regulatory expectations, with guidance remaining principles-based and technology-neutral to allow firms to adapt as threats evolve.
Coordinated oversight between Finance, ISED, and national-security partners will be essential to ensure resilience without layering duplicative obligations.
Canada should strengthen its national AI‑enabled defence posture by combining targeted investment with coordinated operational mechanisms. The government’s planned $560 million investment to reinforce Canada’s digital foundations should explicitly include measures to detect, mitigate, and defend against AI‑driven cyber threats. Existing public‑private information‑sharing channels could be leveraged to identify AI‑specific threat indicators, exchange red‑flag intelligence, and promote defensive innovation across sectors. For federally regulated financial institutions, embedding these efforts within existing principles‑based, technology‑neutral oversight frameworks would enhance national resilience while ensuring alignment between financial‑sector security initiatives and Canada’s broader defence strategy.
Conclusion
The CBA appreciates the opportunity to provide these comments and remains available to discuss these recommendations further.