Rapid adoption of Artificial Intelligence has powered growth but also opened avenues for cybercriminals to misuse AI for sophisticated attacks, Kaspersky has said, spotlighting the need for businesses to invest in proactive cybersecurity defences to meet new-age challenges.
Kaspersky, a global cybersecurity and digital privacy company, said it has been infusing AI across its products and harnessing AI models to counter threats and safeguard users by making technologies more resistant to new and evolving forms of cyberattacks.
From leveraging ChatGPT to write malicious software and automate attacks against multiple users, to misusing AI programmes to track users' smartphone inputs (potentially capturing messages, passwords, and bank codes), cybercriminals are using AI in novel ways, the company cautioned.
Citing the data for 2023, the company said it protected 220,000 businesses across the globe and prevented around 6.1 billion attacks with its solutions and products.
During the same period, 325,000 unique users were protected from potential money theft by banking trojans, it added.
On average, the company has been detecting over 411,000 malicious samples every day in 2024, up from 403,000 such samples a year earlier.
"The number of cyberattacks being launched is not possible only with human resources. They (attackers)...use automation...try to leverage AI," Vitaly Kamluk, cybersecurity expert of Global Research & Analysis Team (GReAT) at Kaspersky, told PTI.
In recent research on using AI for password cracking, Kaspersky noted that most passwords are not stored in plain text but as the output of a cryptographic hash function. A text password can easily be converted into its hash, but reversing the process is computationally hard, it said.
The largest leaked password compilation to date had about 10 billion lines with 8.2 billion unique passwords, according to its July 2024 data.
Alexey Antonov, Lead Data Scientist at Kaspersky, said, "We found that 32 per cent of user passwords are not strong enough and can be recovered from encrypted hash form using a simple brute-force algorithm and a modern GPU 4090 in less than 60 minutes."
According to the company, threat actors can use large language models like ChatGPT-4o for generating scam text, such as sophisticated phishing messages.
AI-generated phishing can overcome language barriers and create personalised emails based on users' social media information. It can even mimic specific individuals' writing styles, making phishing attacks potentially harder to detect.
Ethan Seow, Co-founder of C4AIL, said, "The moment ChatGPT came out, there was a 90 times increase in spam emails to organisations in terms of phishing."
The aggressive adoption of GenAI by organisations has also increased the attack surface. Simultaneously, cyberattackers have gained more sophisticated ways of working with the advent of AI, Seow added.
Another major challenge that has emerged with the advent of AI is deepfakes. There are umpteen instances of fraudsters and criminals tricking unsuspecting users with celebrity impersonation scams, leading to significant financial losses.
Deepfakes are also used by criminals to steal user accounts and send audio money requests using the account owner's voice to friends and relatives.
However, experts said that reliable deepfake detection is not technically feasible at present.
"...this is the future of research. It is a matter of time before you will see companies suggesting solutions that will at least try to tackle this (deepfake detection) problem. I guess this is where the future of cyber security lies," Kamluk said.
In the current scenario of growing threats and attacks, organisations are advised to aim for 100 per cent uptime to keep their businesses cyber-resilient.
In cyberspace, uptime is the duration for which a system is operational, while resiliency refers to a company's ability to respond to a security breach by identifying, tackling and recovering from the incident.
During the annual Cybersecurity Weekend for Asia Pacific Countries 2024 held recently in Sri Lanka, Adrian Hia, Managing Director for the APAC region at Kaspersky, said that a company's system with 100 per cent uptime will result in business resiliency, both on-premise and on cloud.
Using AI, attackers are trying to reshape, reform and reshuffle malware to produce more variations based on the code, lowering the detection rate of malware for anti-virus.
Igor Kuznetsov, Director, Global Research & Analysis Team at Kaspersky, said, "21 per cent of spam attacks are based on AI and it is becoming faster and potentially bigger. So, instead of focusing on offensive AI, one must improve the defensive AI."
As part of defensive AI, the company said its automatic systems detected more than 99 per cent of malware in 2023.
The balance between malware detection and hiding from antivirus "is still maintained and nobody is truly winning in this battle", Kamluk said.
Given the pace at which these technologies are being incorporated into daily use by attackers and defenders alike, cybersecurity experts say regulations and ethics for AI and GenAI must be put in place swiftly.
"Regulations right now will have to play catch up," Seow said, adding that "it is hard to regulate because the movement has already happened".
Ethics is the foundation of humankind. "We need to pay more attention to the ethical education of people, especially among the newer generation. And with AI, it becomes even more important as it has a lot of potential," Kamluk said.
(Only the headline and picture of this report may have been reworked by the Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)