
Chatbot Traps: How to Avoid Job Scams

Picture generated with Leonardo.ai

By Olga Spillane, Security Analyst at HEAnet

The digital job hunt may be convenient, but it also harbours hidden dangers. Cunning scammers, armed with cutting-edge Artificial Intelligence (AI) tools, are crafting ever-more believable fake job offers, conducting phony interviews, and even impersonating real companies. The growing use of chatbots by real recruiters makes it even harder to distinguish genuine interactions from scams.

Scammers often target job seekers to collect personal data, steal passwords, and exploit sensitive information. While people have become increasingly vigilant against more “traditional” phishing attempts, they are often far less alert to the types of scams outlined here. Whether you are a recent graduate or a seasoned professional, navigating this tricky landscape takes a sharp eye and a good deal of caution.

The AI Advantage for Scammers

AI tools such as ChatGPT have propelled traditional job scams to new levels of sophistication. One significant change is the ability to generate polished, grammatically sound text in multiple languages. This eliminates the telltale typos and grammatical errors that once gave away a scam. AI-powered chatbots and text generators can now churn out highly convincing responses, making it harder to identify scams based on awkward phrasing or poor grammar.

Classic WhatsApp Scam Posted on Reddit vs ChatGPT Improved

Furthermore, AI allows scammers to automate and scale their operations. Unlike human con artists, AI chatbots can engage with hundreds of potential victims simultaneously, crafting personalized responses that mimic genuine human communication. These tools are particularly adept at adapting their approach based on the target’s replies, maintaining a seemingly professional and attentive facade.

However, despite these advancements, AI-generated content often lacks depth and specificity. While scammers using AI may provide polished responses, they might struggle with nuanced or highly technical questions. This is a critical weakness you can exploit by throwing some complex, company-specific questions their way.

The AI Arsenal of Job Scams (Generated with Leonardo.ai)

The AI Arsenal of Job Scams

AI has breathed new life into phishing scams and fabricated job listings. Scammers can now mine data from legitimate sources to craft job offers that appear genuine but are entirely forged. These listings may mimic real companies, using slightly altered names, logos, or job descriptions to deceive victims into revealing sensitive personal information.

AI-driven phishing emails are also becoming increasingly personalized. By analysing your online behaviour, AI can create tailored emails that seem legitimate, using details typically known only to genuine recruiters. In some cases, scammers even leverage synthesized voices to leave fake voicemails from recruiters, further enhancing the illusion of authenticity.

Extracts from the Chatbot-Generated Phishing Email to Job Seekers (the job advertisement was scraped from a legitimate job listing but directs to a typosquatted domain)
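The typosquatted domain in this example points to a simple, automatable check: compare the host of any link you are sent against the domains you would genuinely expect from that employer. The sketch below is a minimal illustration using only Python's standard library; the employer domains and the suspicious link in it are made up.

```python
# Minimal sketch: flag look-alike (typosquatted) hosts by comparing them against
# domains you actually expect. All domain names here are hypothetical examples.
from difflib import SequenceMatcher
from urllib.parse import urlparse

KNOWN_GOOD = {"example-corp.com", "careers.example-corp.com"}  # hypothetical employer domains

def check_host(url: str, threshold: float = 0.8) -> str:
    host = urlparse(url).hostname or ""
    if host in KNOWN_GOOD:
        return f"{host}: exact match with a known domain"
    for good in KNOWN_GOOD:
        similarity = SequenceMatcher(None, host, good).ratio()
        if similarity >= threshold:
            return f"{host}: suspiciously close to {good} (similarity {similarity:.2f}) - possible typosquat"
    return f"{host}: no close match to any known domain - verify the sender independently"

print(check_host("https://example-c0rp.com/jobs/apply"))  # flagged: 'o' replaced with a zero
```

A check like this will not catch every trick (scammers also register unrelated but plausible-sounding domains), so treat a "close match" as a prompt to verify the link independently rather than as a verdict.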

On social media, AI enables scammers to build highly convincing profiles for phony recruiters or companies. These profiles may post professional-looking content and interact with job seekers in a way that mirrors real recruitment processes, making it difficult to identify fraud at first glance. These profiles typically use real data from legitimate businesses, but careful scrutiny of their posts and connections can reveal inconsistencies.

Red Flags in AI-Enhanced Job Scams (Generated with Leonardo.ai)

Red Flags in AI-Enhanced Job Scams

While AI has upped the ante for scams, the core principles of job scams remain the same (think upfront fees, offers that sound too good to be true, and unconventional hiring procedures). Here are some key red flags specific to AI-enhanced scams:

  1. Perfectly Polished Communication with Limited Specificity: AI tools might churn out grammatically flawless communication, but these responses often lack specific details. Job descriptions may be vague, and follow-up questions may not elicit more clarity. If the text feels overly generic or scripted, be suspicious.
  2. Difficulty Handling Complex Queries: While AI can handle basic queries well, it often struggles with technical or nuanced questions. Scammers using AI may avoid or deflect more detailed inquiries, such as those related to salary structure, reporting hierarchy, or specific company policies.
  3. Consistency in Style but Lack of Emotional Nuance: AI-generated responses may lack the natural variability of human conversation. Communication might feel overly polished or formal, with little emotional engagement or subtle shifts in tone. If the conversation lacks the personal touch typical of human recruiters, it could be AI-generated.
  4. Instantaneous Responses to Complex Questions: AI tools like ChatGPT can generate responses almost immediately. While this might seem efficient, it’s atypical of genuine recruiters, who typically need time to gather information and provide thoughtful responses. Instant replies to complex or detailed questions may indicate AI involvement.
  5. Over-Reliance on Pre-Constructed Text: Scammers using AI might depend on pre-constructed responses or recycled content. If responses feel repetitive or fail to address the specific nuances of your questions, this is a red flag. Genuine recruiters will provide tailored answers that reflect the conversation’s context.
  6. Refusal to Engage in Real-Time Communication: Scammers often avoid phone or video interviews because AI tools struggle with real-time, unscripted interactions. While deepfake technology and AI voice generation are improving, there are still telltale signs of fraud, such as mismatched lip-syncing in videos or overly smooth, robotic speech in phone calls.
Defending Against AI-Powered Scams (Generated with Leonardo.ai)

Defending Against AI-Powered Scams

Despite the growing sophistication of AI scams, there are several effective strategies to protect yourself:

  1. Request Phone or Video Interviews: While scammers might use deepfake or AI voice technology, real-time human conversation – especially with unscripted, spontaneous dialogue – can reveal inconsistencies that AI struggles to mask.
  2. Ask for Detailed, Specific Information: Engage with specific questions about the role, company structure, and hiring process. AI-generated responses often fail to provide in-depth or context-specific answers, a critical weakness that can expose fraudulent communication.
  3. Research the Job and Company: Verify job listings and company details independently. Check official websites, job boards, and social media profiles to confirm the legitimacy of the role and the recruiter. Avoid relying solely on information provided during AI-generated conversations.
  4. Watch for Quick, Polished Responses: Real recruiters take time to evaluate applications, schedule interviews, and respond to inquiries. If you receive instant, perfectly crafted replies, this could be an indicator that AI is handling the communication.
  5. Test for Complexity: Ask industry-specific or highly technical questions. AI may struggle to provide accurate responses to complex queries, exposing its limitations in these contexts.
  6. Avoid Rushed Decisions: Scammers often create a false sense of urgency. Take your time to verify the legitimacy of the offer, seek advice from professionals, and carefully assess the information provided.
  7. Verify Recruiters Online: Investigate recruiters on professional platforms like LinkedIn or Glassdoor. Be cautious if their profiles show signs of fraudulent activity, such as low connection counts, recent profile creation, and sparse or inconsistent information.
Fake Recruiter on LinkedIn
  8. Use Safe Browsing Tools: If you encounter suspicious links, inspect them using secure browsing tools like Browserling, Urlscan or Any.Run. These tools allow you to verify the safety of links without putting your device at risk; a minimal API-based sketch follows this list.
Browserling Main Page with a PDF File Link
Viewing a PDF File Hosted Online through Browserling
  9. Leverage AI for Protection: Use AI tools to your advantage. For example, a reverse image search can help you verify whether a recruiter’s photo is stolen, and AI text detection tools like GPTZero can reveal whether the messages you have received were generated by a chatbot.
Reverse Image Search of a Crop from the Fake Recruiter LinkedIn Page
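For the safe-browsing advice in point 8, the sketch below shows one way to hand a suspicious link to urlscan.io rather than opening it on your own device. It assumes you have a urlscan.io API key and follows the publicly documented scan and result endpoints; the exact field names and rate limits may change, and Browserling or Any.Run offer similar checks directly in the browser with no code at all.

```python
# Minimal sketch: submit a suspicious link to urlscan.io and fetch the verdict.
# The API key and link are placeholders; endpoint and field names follow
# urlscan.io's documented public API and may change over time.
import time
import requests

API_KEY = "YOUR_URLSCAN_API_KEY"                           # placeholder
SUSPICIOUS_URL = "https://example-jobs-offer.test/apply"   # made-up link for illustration

headers = {"API-Key": API_KEY, "Content-Type": "application/json"}
submission = requests.post(
    "https://urlscan.io/api/v1/scan/",
    headers=headers,
    json={"url": SUSPICIOUS_URL, "visibility": "unlisted"},
    timeout=30,
)
submission.raise_for_status()
scan = submission.json()
print("Report will appear at:", scan["result"])

# The scan takes a short while; poll the result endpoint until it is ready.
for _ in range(10):
    time.sleep(10)
    report = requests.get(scan["api"], timeout=30)
    if report.status_code == 200:
        verdicts = report.json().get("verdicts", {})
        print("Overall verdict:", verdicts.get("overall", {}))
        break
else:
    print("Scan not finished yet - open the report link in your browser instead.")
```

Whichever tool you choose, the principle is the same: let a disposable, isolated environment open the link so your own machine never has to.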

Bottom Line

While the strategies outlined here can help you detect AI-powered scams, it is important to recognise that AI technology is advancing rapidly. Many current weaknesses—such as difficulties with complex questions or live conversations—may diminish as AI continues to improve. Staying informed about new developments in AI will be crucial for keeping up with evolving scam tactics. Always remain vigilant and update your protective measures as technology advances.


Resources

Tools mentioned in the article: Browserling, Urlscan, Any.Run, GPTZero

Images for the article generated with: Leonardo.ai


About the author

Olga Spillane is a cybersecurity professional with expertise in combating emerging digital threats. With a background in data analytics and several years of hands-on experience in a Security Operations Center (SOC), she brings a deep understanding of cybercrime tactics. Holding an MSc in Software Design with Cyber Security, Olga currently works at HEAnet, focusing on protecting educational institutions from sophisticated cyber threats. She is dedicated to educating others about safeguarding against AI-driven fraud, especially in the context of online job scams.

 


Also this year GÉANT joins the European Cyber Security Month, with the campaign ‘Your brain is the first line of defence’. Read articles from cyber security experts within our community, watch the videos, and download campaign resources on connect.geant.org/csm24

