Future of AI and Cybersecurity: The Power of Partnership

Friday, May 19th, 2023




In our rapidly evolving digital landscape, the emergence and evolution of artificial intelligence (AI) has sparked a provocative debate about responsible innovation. AI offers a host of benefits for organizations across all industries. Yet as this technology enriches various aspects of our lives, a critical question remains: will its capabilities drive cybersecurity toward success, or serve as a catalyst for its downfall?

With AI and machine learning often described as a double-edged sword, understanding their advantages and drawbacks is crucial to assessing their impact on cybersecurity within the financial sector. Financial institutions (FIs) are inclined to implement AI because it can automate tasks and processes that would otherwise be tedious and repetitive. AI is already used for data analysis, risk assessment, fraud detection, and customer support. Now, ChatGPT and similar tools promise a whole new level of transformation for FIs. ChatGPT is designed to simulate human-like conversation and generate responses to conversational prompts. Nevertheless, like any other technology, these capabilities can have unintended outcomes or even be employed with malevolent intent.

Some of the prevailing concerns include:

  • Updating internal policies: In order to extract business value from AI technology while mitigating the risk of data exposure and preventing data leakage, it is crucial to update internal policies before tool implementation. This entails not only training users and raising awareness but also ensuring that any new processes are understood and closely monitored for unforeseen results.
  • Optimized cyber attacks: AI is not only being utilized for defensive purposes in cybersecurity but also has the potential to enhance threat vectors, especially targeted social engineering. These tools can utilize publicly available data to improve the language and credibility of text-based attacks, such as phishing, smishing, business email compromise (BEC), angler phishing, and more.
  • Skill gaps and stress: Internal cybersecurity teams are already burdened with the responsibility of managing ever-evolving threats and the arsenal of products needed for safeguarding networks. AI technologies add an additional layer of complexity that demands specialized expertise and proficiency for successful implementation and optimal utilization.

To proactively prevent or reduce the impact of AI-enabled cyberattacks, FIs should:

1. Conduct regular cyber security risk assessments as the technology and capabilities on both sides of the wall evolve. Adversaries will modify their approaches, but vendors will continue to adapt their product lines as well.

2. Expand research on AI technology and invest in AI solutions that complement your existing security stack and align with acceptable risk levels. Cybersecurity and IT teams should deepen their understanding of AI-based security measures, including behavioral analytics, anomaly detection, and threat intelligence. Equip them with the knowledge and skills to identify potential threats and respond promptly and effectively.

3. Foster a culture of continuous learning, recognizing that knowledge is a powerful antidote to fear and uncertainty.

  • Develop comprehensive training programs that offer insights into the fundamentals of AI technology, encompassing its applications, benefits, and limitations. These programs can be delivered through online courses or self-paced learning modules.
  • Provide accessible AI tools that enable individuals to explore the technology firsthand and gain practical experience.
  • Cultivate an environment of open discussion and collaboration to facilitate knowledge sharing and spur innovation.
  • Prioritize upskilling initiatives for employees at all levels, from senior leadership to frontline teams. While hiring experts is an option, a pragmatic approach is to equip your existing workforce with the knowledge and skills necessary to engage in informed conversations with these experts.

4. Stay updated on the latest security awareness training resources encompassing fundamental topics such as AI, deepfakes, phishing attacks, and malware attacks.

  • Recognize that AI technology can tailor phishing attacks to specific individuals or organizations, making them more challenging to detect.
  • Be aware of the risks posed by deepfakes, which can be employed to impersonate individuals or organizations, disseminate misleading information, or gain unauthorized access to sensitive data.
  • Acknowledge that malware is growing more sophisticated, with evasive strains capable of bypassing traditional security measures.

5. Misinformation/disinformation training: Equip employees with essential skills to navigate the digital landscape responsibly, focusing on common tactics such as emotional appeals, cherry-picking data, and the misrepresentation of facts.

  • Promote the practice of fact-checking before sharing information with others. Emphasize the importance of verifying the accuracy of information from reputable sources and identifying potential conflicts of interest.
  • Foster media literacy, empowering individuals to access, analyze, and evaluate media content. Educate employees on recognizing biases in media and understanding how political, economic, and social factors can influence information. Cultivating the ability to pause and reflect on the quality of information received has proven crucial in countering scams and social engineering attacks.
  • Maintain a strong connection with the company culture, addressing concerns and dispelling rumors surrounding the implementation of AI technology. By addressing potential fears and ensuring transparency, organizations preserve morale and trust, mitigating the risk of both unintentional and malicious insider threats.
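As a toy illustration of the anomaly detection mentioned in step 2, the sketch below flags outliers in a single behavioral metric using z-scores. The function name, the threshold, and the login-count data are all hypothetical examples; real AI-based security tools model many behavioral features with far more sophisticated methods.

```python
import statistics

def zscore_anomalies(values, threshold=2.5):
    """Return indices of values whose z-score exceeds the threshold.

    A minimal statistical sketch, not a production detector: real
    behavioral-analytics tools combine many signals, not one metric.
    """
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:  # no variation means nothing stands out
        return []
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > threshold]

# Hypothetical daily login counts for one account; the last day spikes.
logins = [12, 10, 11, 13, 12, 11, 10, 95]
print(zscore_anomalies(logins))  # prints [7]
```

Even this simple baseline shows the core idea: establish what "normal" looks like for a user or system, then surface deviations for human review rather than relying on signature matching alone.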

The Future of AI and Cybersecurity

One of the primary concerns about the future of cybersecurity is potential job displacement caused by AI. However, integrating AI is not a straightforward replacement of humans but a complex process in which human involvement remains vital: as the technology advances, people are still needed to implement, fine-tune, and monitor these tools.

Employees contribute indispensably to a company, providing distinctiveness that is valuable for their brand and service delivery. Widespread reliance on AI to deliver services and products could lead to the creation of formulaic and indistinguishable companies. Thus, it is necessary to consider whether AI can genuinely maintain the uniqueness of your services and products.

Collaborate with employees to upskill or transition into alternative roles, enabling them to continue contributing to the company’s success while leveraging AI to enhance efficiency. Given current talent gaps and the difficulty in filling critical roles, it is essential to nurture loyal employees whose goals and accomplishments are aligned with organizational objectives.

Key point: Prioritize the well-being of your employees, and they will reciprocate by caring for the business, as the human element consistently adds value and drives success. To effectively embrace progress, preserve distinctiveness, and deliver efficient cyber risk management, a collaborative approach between technology and the human element is crucial. Capitalize on the strengths of both human and machine intelligence to achieve greater results.

Amidst the hype, panic, and uncertainty surrounding AI’s impact on cybersecurity in the financial sector, there is still a vast realm of knowledge to explore. What remains constant, much like in other aspects of life, is the principle of reaping what we sow. While cybersecurity professionals may experience a mix of apprehension and excitement towards the latest AI innovations, reinforcing the human firewall stands as a crucial element in a robust cyber security risk management strategy. Regardless of technological advancements, prioritizing the human element will always be vital for ensuring success and maintaining a comprehensive approach to cybersecurity.

Elizabeth Houser

Director, Cyber Defense

Elizabeth Houser is the Director of Cyber Defense at DefenseStorm and has held roles ranging from security engineer and SOC manager to her current responsibilities for social engineering, vulnerability management, and tabletop services. Prior to joining DefenseStorm, Elizabeth volunteered at the King County Sheriff’s Office Major Crimes Unit while completing her degree in Information Security and Digital Forensics and was named Volunteer of the Year for her service. In addition to the CISSP, Elizabeth holds the CISA, CISM, CRISC, and CGEIT certifications from ISACA. She also holds a Master of Library Information Science degree from the University of Washington and an MS in Entomology from the University of Tennessee. Elizabeth currently serves on the Computer Information Systems advisory board for Edmonds College.