By: Nicolas Côté, Head of Cybersecurity Practice — Solulan
Nicolas Côté supports SMEs and large organizations in protecting their technological environments and managing information security risks. Recognized for his ability to clearly explain complex issues, he guides organizations toward secure practices and the responsible adoption of new technologies, including artificial intelligence.
Artificial intelligence is gradually being integrated into businesses, often without an official framework having been established. In many SMEs, employees are already using AI tools to write messages, analyze data, create content, or simplify repetitive tasks. This spontaneous adoption reflects the real value AI can bring to productivity and decision-making.
However, the informal, and especially ungoverned, use of these tools carries significant risks. Many public AI platforms retain the information submitted to them, whether it be text, files, customer data, or internal information. In some cases, this data may be stored in countries where privacy regulations differ significantly from those in force in Canada.
For an SME, this represents a challenge not only in terms of security, but also compliance, particularly in relation to Law 25 in Quebec.
An underestimated risk: AI used without governance
When each employee chooses their own AI tool—often free and downloaded in just a few seconds—the organization quickly loses visibility over where its information is circulating. Privacy practices vary from one platform to another, as does the location where data is hosted. In addition, user prompts frequently contain sensitive data, sometimes without the user realizing it.
The multiplication of tools therefore increases the risk of data leaks or loss of control.
Moreover, AI relies on the access already granted to employees. If privileges are poorly configured, an employee may inadvertently obtain, through AI, information they should not have access to, simply because internal permissions allow it. This has been demonstrated in real cases where AI answered sensitive questions because access rights were too broad.
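The mechanism is worth making concrete. A minimal sketch, with purely illustrative names and data (real assistants such as Copilot enforce this through the platform's own permission model): the assistant retrieves only what the asking user is permitted to read, so any over-broad grant becomes an AI answer.

```python
# Sketch: an AI assistant typically surfaces only documents the asking
# user can already read -- so over-broad grants turn into AI answers.
# All names and paths here are illustrative, not a real API.

DOCUMENTS = {
    "hr/salaries-2024.xlsx": "confidential salary grid",
    "sales/q3-report.docx": "quarterly sales figures",
}

# Folder-level grants per user (the flaw: 'intern' was also granted 'hr/').
GRANTS = {
    "intern": {"sales/", "hr/"},
    "analyst": {"sales/"},
}

def retrievable(user: str) -> list[str]:
    """Return the documents the assistant could quote back to this user."""
    folders = GRANTS.get(user, set())
    return [path for path in DOCUMENTS
            if any(path.startswith(f) for f in folders)]

print(retrievable("analyst"))  # only the sales report
print(retrievable("intern"))   # also the salary file: unintended exposure
```

Nothing in the sketch is malicious; the exposure comes entirely from the pre-existing grant, which is why permissions must be reviewed before AI is deployed.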
Governing AI: a simple approach that protects your organization
The solution is not to limit the use of AI, but rather to govern it in a structured and consistent way. A few well-targeted measures are enough to secure data while fully capturing the benefits of AI in the workplace.
1. Choose tools suited to SME needs
Professional AI platforms, such as Microsoft Copilot in its Enterprise version, offer better data protection. They notably ensure that:
- data is not used to train the model;
- files, prompts, and information remain within the company’s Microsoft 365 environment;
- privacy policies comply with strict standards (SOC 2, ISO 27001, etc.);
- access is managed through your existing system.
This type of tool reduces the risk of data leaks and helps meet compliance requirements, including those related to Law 25.
2. Define an internal AI usage policy
An AI policy does not need to be complex to be effective. It should specify:
- the officially authorized tools within the organization;
- the uses expected of employees (data processing, help with email writing, report production, etc.);
- the types of information that can be provided to AI;
- the data that must be strictly excluded (salaries, HR files, financial data, sensitive customer information);
- the responsibilities of each individual regarding confidentiality.
A well-defined policy helps limit risks while maintaining flexibility of use.
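The policy points above can even be expressed as a simple, machine-checkable allow-list. A minimal sketch, in which the tool names and data classes are hypothetical examples, not a recommendation:

```python
# Sketch: an AI usage policy as an allow-list check.
# Tool names and data classifications are illustrative only.

APPROVED_TOOLS = {"copilot-enterprise"}

# Data classes the policy permits to be submitted to an approved tool.
ALLOWED_DATA = {"public", "internal-general"}

# Strictly excluded: salaries, HR files, financials, sensitive customer data.
EXCLUDED_DATA = {"salary", "hr-file", "financial", "customer-sensitive"}

def submission_allowed(tool: str, data_class: str) -> bool:
    """True only for an approved tool AND a permitted data class."""
    return (tool in APPROVED_TOOLS
            and data_class in ALLOWED_DATA
            and data_class not in EXCLUDED_DATA)

assert submission_allowed("copilot-enterprise", "internal-general")
assert not submission_allowed("copilot-enterprise", "salary")
assert not submission_allowed("random-free-chatbot", "public")
```

Even if the policy only ever lives in a document, writing it in this explicit form forces the organization to name its approved tools and its excluded data categories unambiguously.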
3. Review access rights and permissions before deploying AI
Since AI is based on user access rights, a well-established permission structure is essential. It is recommended to:
- apply the principle of least privilege;
- segment workspaces and folders according to roles;
- review external shares that are still active;
- strengthen credential protection through multi-factor authentication (MFA).
These simple but decisive measures greatly reduce the risk of unintentional disclosure.
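A least-privilege review amounts to diffing each user's actual grants against the baseline their role requires. A minimal sketch with hypothetical roles and folders (in Microsoft 365, the equivalent data would come from the admin center's permission and sharing reports):

```python
# Sketch: least-privilege review as a diff between actual grants and a
# role baseline. Roles, users, and folders are illustrative only.

ROLE_BASELINE = {
    "accounting": {"finance/"},
    "sales": {"sales/"},
}

# user -> (role, folders actually granted)
ACTUAL_GRANTS = {
    "alice": ("accounting", {"finance/", "hr/"}),  # 'hr/' exceeds her role
    "bob": ("sales", {"sales/"}),
}

def excess_grants(user: str) -> set[str]:
    """Folders granted to the user beyond what their role requires."""
    role, grants = ACTUAL_GRANTS[user]
    return grants - ROLE_BASELINE[role]

for user in ACTUAL_GRANTS:
    extra = excess_grants(user)
    if extra:
        print(f"{user}: revoke before enabling AI -> {sorted(extra)}")
```

Running such a diff before deployment catches exactly the over-broad grants that AI would otherwise surface.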
4. Raise awareness and train employees
Basic training helps avoid many risky situations. It can cover:
- recognizing sensitive data;
- best practices when using AI;
- writing clear and secure prompts;
- validating AI-generated results.
Training teams not only improves security, but also optimizes the benefits the organization can gain from AI.
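One habit worth teaching is checking a prompt for obvious identifiers before it leaves the organization. A minimal sketch of such a pre-submission check; the patterns below are illustrative and far from exhaustive, so real deployments should rely on proper DLP tooling:

```python
import re

# Sketch: mask obvious identifiers before a prompt is submitted to an
# external AI tool. Patterns are illustrative, not production-grade DLP.

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-. ]\d{3}[-. ]\d{4}\b"),
    "SIN": re.compile(r"\b\d{3}[- ]\d{3}[- ]\d{3}\b"),  # Canadian SIN format
}

def redact(prompt: str) -> str:
    """Replace each detected identifier with a [TYPE] placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Email jane@acme.ca, call 514-555-0199, SIN 123-456-789"))
# -> Email [EMAIL], call [PHONE], SIN [SIN]
```

Even as a training exercise rather than a deployed filter, walking employees through examples like this makes "sensitive data" tangible instead of abstract.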
Examples of impact in comparable organizations
Some organizations that have structured their AI usage have observed tangible gains. For example:
- A manufacturing SME with approximately 120 employees reduced the time required for customer communications by nearly 30% after adopting a professional AI tool and implementing an access audit.
- A multi-site organization with approximately 800 employees accelerated its financial close cycle by one and a half days thanks to an AI policy, the blocking of unapproved tools, and the consolidation of internal practices.
These results demonstrate that productivity increases when AI is adopted in a secure and thoughtful manner.
Conclusion: an innovation to master in order to leverage it fully
AI represents a significant opportunity for SMEs to improve efficiency, reduce repetitive tasks, and support growth. However, unrestricted, ungoverned use exposes the organization to privacy, compliance, and security risks.
By choosing an appropriate AI platform, defining a clear policy, reviewing access rights, and providing basic training to employees, it is possible to integrate AI in a secure and sustainable way. These simple measures allow your SME to fully benefit from innovation while protecting its data and meeting current regulatory requirements.
Solulan supports organizations in this process by offering audits, advisory services, governance support, and IT integration services tailored to the needs and realities of SMEs.
— Nicolas Côté, Head of Cybersecurity Practice, Solulan