Shadow Artificial Intelligence (AI) vs Managed AI: Kaspersky reviews the use of neural networks for work in the Middle East, Turkiye and Africa (META) region

Recent research conducted by Kaspersky, titled “Cybersecurity in the Workplace: Employee Knowledge and Behaviour,” has shed light on the growing use of Artificial Intelligence (AI) tools across the Middle East, Turkiye, and Africa (META) region. According to the study, 81.7% of professionals report using AI tools in their daily work tasks. While this adoption is promising, a critical concern has surfaced regarding cybersecurity training: only 38% of participants received training specifically focused on the cybersecurity implications of using neural networks.

Key Insights from Kaspersky’s Research

The survey involved 2,800 participants from seven countries, including South Africa, Kenya, and Egypt. A remarkable 94.5% of respondents indicated familiarity with the term "generative artificial intelligence." However, there’s a troubling gap in cybersecurity training even as these AI tools become integrated into workplace routines. For instance, many employees rely on AI for tasks like writing and editing texts (63.2%), managing emails (51.5%), creating images or videos (45.2%), and performing data analytics (50.1%).

Of particular concern is the finding that a third of respondents (33%) have not undergone any form of AI training, and even among those who did receive training, 48% said it focused primarily on effective tool usage rather than on the associated cybersecurity risks. This lack of preparedness could expose organizations to significant risks, including data leaks and prompt injection, where maliciously crafted input causes an AI tool to ignore its instructions or disclose data it should not.

Shadow AI: A Growing Concern

Interestingly, the survey highlights a phenomenon referred to as "shadow AI," a form of shadow IT in which employees use AI tools without corporate oversight. While 72.4% of respondents indicated that generative AI tools are permitted at their workplace, a substantial 21.3% acknowledged that these tools are not allowed. Set against the 81.7% who report using AI at work, this gap suggests that some employees are using these tools outside sanctioned channels, a governance blind spot that cybercriminals are keen to exploit.

To effectively harness the benefits of AI while mitigating risks, organizations are encouraged to implement clear policies outlining acceptable AI usage. These policies should delineate prohibited functions and sensitive data types, specify which AI tools employees can use, and formalize the documentation of these guidelines. Additionally, companies must ensure that employees receive adequate training to understand both the benefits and risks associated with AI.
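To make such a policy actionable rather than purely documentary, some organizations capture it in machine-readable form, so the same rules can be published to staff and enforced by tooling. The following is a minimal Python sketch of that idea; the tool names, data categories, and the is_permitted helper are hypothetical illustrations, not part of Kaspersky's guidance.

    # Hypothetical sketch: an AI usage policy captured as data, so the same
    # rules can be published to employees and enforced by tooling.
    # Tool names, data categories and functions are illustrative placeholders.
    from dataclasses import dataclass, field

    APPROVED_TOOLS = {"corporate-chat-assistant", "internal-code-helper"}
    PROHIBITED_DATA = {"customer_pii", "credentials", "financial_records"}
    PROHIBITED_FUNCTIONS = {"legal_advice", "hr_decisions"}

    @dataclass
    class AIRequest:
        tool: str      # which AI tool the employee wants to use
        function: str  # the business task it will be used for
        data_types: set = field(default_factory=set)  # data categories in the prompt

    def is_permitted(req: AIRequest) -> tuple[bool, str]:
        """Return (allowed, reason) for a proposed AI interaction."""
        if req.tool not in APPROVED_TOOLS:
            return False, f"tool '{req.tool}' is not on the approved list"
        if req.function in PROHIBITED_FUNCTIONS:
            return False, f"function '{req.function}' is prohibited"
        if req.data_types & PROHIBITED_DATA:
            return False, "prompt contains a prohibited data type"
        return True, "permitted"

    # Drafting marketing copy with an approved tool passes the policy check...
    print(is_permitted(AIRequest("corporate-chat-assistant", "marketing_copy")))
    # ...but pasting customer PII into the same tool does not.
    print(is_permitted(AIRequest("corporate-chat-assistant", "email_reply",
                                 {"customer_pii"})))

Expressed this way, the policy document employees read can also drive automated checks, which keeps the written guidelines and the enforced rules from drifting apart.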

Recommended Actions for Organizations

Chris Norton, General Manager for Sub-Saharan Africa at Kaspersky, emphasizes a balanced and tiered approach. Organizations should avoid extremes such as outright bans or unrestricted access. Instead, implementing a tiered access model, tailored to the data sensitivity of each department, lets organizations tap AI's innovative potential while maintaining security. Here are several recommendations for organizations to bolster AI-related cybersecurity:

  1. Conduct Employee Training: Incorporate courses on responsible AI usage into training programs, such as those offered through Kaspersky's Automated Security Awareness Platform.

  2. Update IT Knowledge: Provide IT specialists with up-to-date training on exploitation techniques and effective defense strategies. The ‘Large Language Models Security’ training from Kaspersky can help enhance cybersecurity proficiency.

  3. Secure Devices: Ensure that all employees have robust cybersecurity solutions installed on their devices, including personal devices that access business data. This step is crucial to protect against threats such as phishing and deceptive AI applications.

  4. Regular Monitoring: Conduct regular surveys to ascertain the frequency of AI tool usage and the specific tasks they are employed for. This ongoing assessment allows organizations to evaluate both the benefits and risks associated with AI utilization and adjust policies accordingly.

  5. Implement AI Proxies: Utilize specialized AI proxies that filter queries in real time by removing sensitive data, combined with role-based access controls that block inappropriate use cases (a minimal sketch of this pattern follows the list).

  6. Develop Comprehensive Policies: Establish a full-fledged policy that addresses various risks related to AI use. Kaspersky’s guidelines for securely implementing AI systems can serve as a useful resource in this effort.
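To make recommendation 5 concrete, below is a minimal Python sketch, not Kaspersky tooling, of an AI proxy that combines prompt redaction with role-based, tiered access of the kind Norton describes. The regex patterns, role table, and forward_to_model stub are hypothetical placeholders.

    # Hypothetical sketch: an AI proxy that redacts sensitive data from
    # prompts and applies role-based access control before forwarding them.
    # The regex patterns, role table and forward_to_model() stub are
    # illustrative, not a production redaction engine.
    import re

    REDACTION_PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    # Tiered access: the use cases each role may route through the proxy.
    ROLE_PERMISSIONS = {
        "engineering": {"code_review", "documentation"},
        "marketing": {"copywriting", "translation"},
    }

    def redact(prompt: str) -> str:
        """Replace each sensitive-pattern match with a labelled placeholder."""
        for name, pattern in REDACTION_PATTERNS.items():
            prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
        return prompt

    def forward_to_model(prompt: str) -> str:
        return f"forwarded: {prompt}"  # stand-in for the approved AI service call

    def proxy_request(role: str, use_case: str, prompt: str) -> str:
        """Gate the request by role, then forward a redacted prompt."""
        if use_case not in ROLE_PERMISSIONS.get(role, set()):
            return f"blocked: role '{role}' may not use '{use_case}'"
        return forward_to_model(redact(prompt))

    print(proxy_request("marketing", "copywriting",
                        "Reply to jane.doe@example.com about card 4111 1111 1111 1111"))
    print(proxy_request("marketing", "code_review", "Review this function"))

In practice, the redaction patterns would cover the organization's own sensitive data types, and the role table would mirror the per-department tiered access model described above.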

Conclusion

Kaspersky’s research underscores the critical need for organizations in the META region to balance the integration of AI tools with comprehensive cybersecurity measures. While AI can enhance productivity and facilitate innovative solutions, these benefits come with inherent risks that must be managed effectively. The key takeaway is that a structured and well-informed approach to AI usage will not only protect sensitive data but also foster an environment where employees can leverage AI technology safely and effectively.

As organizations continue to navigate this ever-evolving landscape, proactive measures and continuous training are paramount for safeguarding against potential cyber threats associated with both Shadow AI and Managed AI utilization.
