Me: “Can you be misused for cyber-attacks?”
ChatGPT: “As an AI language model, I don’t have agency or intentions of my own. However, like any technology, I can potentially be misused by individuals with malicious intent.
Ultimately, the responsibility for the use and potential misuse of AI systems lies with the users. It’s important for individuals and organizations to use AI technology ethically and responsibly, adhering to legal and ethical guidelines, and taking precautions to prevent any harm or misuse.”
In this conversation, ChatGPT is not wrong…
Generative artificial intelligence (AI) tools such as ChatGPT and Google Bard are gaining popularity all over the world. These tools allow their users to boost efficiency and productivity in their daily tasks and responsibilities.
However, great capabilities can be misused. Soon after they were introduced, generative AI tools were used to create malware, some of which appeared in official app stores. Generative AI has also been used to craft emails and messages for AI-powered phishing campaigns, and an AI-generated video on YouTube was found to be malicious.
Unfortunately, generative AI companies are not adequately handling these issues. In a recent report, the Check Point Research team found that Bard (Google’s generative AI tool) imposes almost no restrictions on the creation of phishing emails, and that with minimal manipulation it can be used to develop keylogger malware, which poses a security concern.
For these reasons, some companies have decided to ban the use of AI tools on corporate devices or networks, while other companies that permit employee access to generative AI tools have ended up as victims of sensitive data leakage.
Therefore, until proper measures are available to prevent sensitive data from leaking and internal information from being stolen, companies should take extra care with their security policies.
Mobile devices are no exception. In fact, mobile users are more likely to download a malicious app or fall prey to phishing attempts. Mobile devices’ smaller screens and their large number of applications and notifications can distract users, making it harder for them to avoid clicking on malicious links or downloading malicious files. For many of us, social networks are the most used apps on our mobile devices, which means mobile users are more exposed to social engineering and phishing.
Moreover, the blurred lines between personal and professional use of mobile devices make them a main entry point into an organization, so protection against AI threats should be a top priority.
What do you need to do to protect your mobile fleet against AI threats?
• The most important thing to realize is that mobile device users cannot be trusted. Attacks keep increasing in number and sophistication, and are now too advanced to be detected by a human being. A classic example is phishing sites, which have evolved so much that they look identical to the original site. To deal with this level of sophistication, you need advanced technology to protect the device and prevent threats from entering your organization.
• Mobile devices are an entry point to your organization. With AI evolving so quickly, detection and remediation are not enough. Your mobile security solution must include preventative capabilities that stop the threat on the device level, before it gains any access to your corporate assets.
• Generative AI learning capabilities are overwhelmingly impressive. To keep up, you need to use AI to fight AI. Make sure your security solution uses AI and machine learning technology to maintain the upper hand.