
Deepfakes, voice clones, and AI-written lures

by Aaron Flack on Oct 17, 2025



European Cybersecurity Month highlights social engineering for good reason. Attacks now target people more than code, and even organisations with sound controls can be caught out when a cloned voice exploits a routine approval process. These failures are avoidable only if leadership treats social engineering as a business risk spanning people, processes, and technology, rather than solely an information technology problem.

Attackers now use generative AI to imitate leaders and rush requests through governance checks. The realism gap has closed, which means traditional tone checks and spelling-mistake heuristics no longer protect executive workflows. ENISA’s latest Threat Landscape report states that by early 2025, AI-supported phishing accounted for more than 80 per cent of observed social engineering activity, reflecting a step-change in both scale and believability.

UK signals point in the same direction. The NCSC Annual Review 2025 reports record incident volumes and warns that AI is already amplifying attacker capability, which raises pressure on leadership teams to tighten controls around approvals, payments, and supplier changes. Ofcom’s deepfake research shows that nearly half of UK respondents believe they have encountered deepfakes in the last six months, indicating an environment where staff confidence in spotting fakery is low and exposure is diffuse. 

Boards cannot buy their way out of this problem with tools alone. What works is disciplined verification, rehearsed behaviours, and crisp decision rights. The guidance below is designed for directors who want proof, not platitudes.

The three attack patterns now targeting leadership teams

1) Voice clone approvals by phone or Teams

Attackers capture short clips of public audio, build a convincing voice clone, then ring finance approvers or PAs with urgent instructions tied to time-sensitive transactions or confidential deals. UK media and professional bodies have documented real-world losses, including the UAE case in which AI-assisted voice cloning helped push through a 35 million dollar transfer, and have issued repeated warnings to UK finance leaders about rapid voice clone scams.

What changes the risk calculus is speed. A 15-minute clone from open sources is enough to pass casual checks. Tests by UK journalists and security practitioners have demonstrated just how quickly a believable clone can be created and used to authorise payments.

2) Deepfake video that manufactures social proof

Fraudsters now run full video calls with synthetic participants. In 2024, a Hong Kong finance worker at the UK firm Arup was duped into wiring roughly 20 million pounds after a group video meeting in which every colleague and leader on screen turned out to be fake. This incident has been publicly confirmed and repeatedly analysed as a watershed moment in executive social engineering.

3) AI-written email and chat lures that match executive tone

Large language models generate emails that pass tone, grammar, and domain-specific vocabulary checks. ENISA’s finance sector analysis highlights how social engineering generates direct financial losses and fuels Business Email Compromise, with adversaries chaining multiple channels such as email, adversary-in-the-middle pages, and follow-on voice contact.

Board-level tells and verification moves

The aim is not to teach directors to spot pixels or audio artefacts. It is to standardise friction where it matters and make that friction socially acceptable within the organisation’s culture.

For voice and video

  • Background cadence: cloned voices often lack the natural micro-disruptions of real calls, such as breaths, overlaps, and off-topic acknowledgements. A legitimate speaker will interrupt themselves, ask for confirmation, and reference context unprompted.

  • Latency patterns: deepfake video calls often show a consistent micro-lag between lip movement and audio, especially when multiple synthetic participants speak in turn.

  • Request scope: attackers push for deviations from standard payment, HR, or supplier uplift policies, usually with confidentiality as a pretext. Treat confidentiality as a flag to increase checks, not reduce them.

  • Verify out-of-band using pre-registered channels: require a second contact that is never shared in email footers or public pages. No exceptions, including Chair or CEO.

  • Use a challenge phrase for authorisations: a short phrase known to approver pairs, rotated quarterly. No tool required, only discipline; a minimal sketch of how this combines with the callback rule follows the NCSC note below.

These cues align with NCSC advice on preserving integrity in the age of generative AI, which emphasises verifiable trust signals over eyeballing synthetic media quality.
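
To make “verifiable trust signals” concrete, the sketch below writes the callback and challenge-phrase rules down as a single gate. It is a minimal illustration in Python under stated assumptions: the registers of pre-registered numbers and approver-pair phrases, and the role names, are hypothetical, not a description of any particular approvals system.

```python
from dataclasses import dataclass

# Hypothetical registers held by Finance and Risk. In practice the phrases
# would be agreed verbally and never stored alongside payment systems.
REGISTERED_NUMBERS = {"cfo": "+44 20 7946 0000"}              # pre-registered callback numbers
CHALLENGE_PHRASES = {("cfo", "controller"): "amber-harbour"}  # approver-pair phrase, rotated quarterly

@dataclass
class ApprovalCheck:
    requester: str        # role that issued the instruction, e.g. "cfo"
    approver: str         # role asked to act, e.g. "controller"
    callback_number: str  # number actually dialled for the second-channel check
    phrase_given: str     # challenge phrase quoted on that callback

def passes_verification(check: ApprovalCheck) -> bool:
    """Approve only if both out-of-band rules hold; anything else stops the payment."""
    # Rule 1: call back on the pre-registered number, never one supplied in an
    # email footer, a meeting invite, or the call itself.
    if check.callback_number != REGISTERED_NUMBERS.get(check.requester):
        return False
    # Rule 2: the approver pair's current challenge phrase must match.
    return check.phrase_given == CHALLENGE_PHRASES.get((check.requester, check.approver))
```

The point is not the code but the shape of the rule: either the registered number and the current phrase both check out, or the instruction stops, whoever appears to be asking.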

For emails and chats

  • Structure mirrors internal templates a little too well: AI trained on previous communications will mimic headers, salutations, and sign-offs. Treat perfect mimicry during unusual requests as suspicious.

  • Cross-channel escalation: email request followed by a phone call from a recognisable voice, or a Teams ping that repeats the instruction. This is a feature, not a coincidence.

  • Supplier banking change with time pressure: force a callback to an already recorded number and a two-person sign-off. No callback, no change, as sketched below.
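
That last rule can be mechanical rather than discretionary. The short Python sketch below is one hypothetical way to encode it; the supplier master structure, field names, and thresholds are illustrative assumptions, not a reference to any particular finance system.

```python
from dataclasses import dataclass, field

# Hypothetical supplier master: the only trusted source of callback numbers.
SUPPLIER_MASTER = {
    "ACME-LTD": {"callback_number": "+44 161 496 0000"},
}

@dataclass
class BankDetailChange:
    supplier_id: str
    number_called: str                            # number actually dialled to confirm the change
    confirmed_on_callback: bool                   # supplier confirmed the new details on that call
    approvers: set = field(default_factory=set)   # distinct people who signed off

def change_is_acceptable(change: BankDetailChange) -> bool:
    """Apply 'no callback, no change' plus the two-person sign-off."""
    record = SUPPLIER_MASTER.get(change.supplier_id)
    if record is None:
        return False                                     # unknown supplier: reject outright
    if change.number_called != record["callback_number"]:
        return False                                     # callback must use the recorded number
    if not change.confirmed_on_callback:
        return False                                     # no callback, no change
    return len(change.approvers) >= 2                    # dual approval, always
```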

Governance fit, mapped to common Board artefacts

  • ISO 27001 culture and awareness controls: direct the CISO to evidence how awareness training covers synthetic media, voice clone verification, and executive workflows such as urgent supplier uplifts or accelerated M&A activity. Require proof of rehearsal, not just slideware.

  • ISO 27001 supplier and change management controls: mandate a formal test of supplier bank detail changes and executive delegation rules under simulated pressure.

  • NCSC Board Toolkit prompts: adapt the Toolkit questions to include synthetic media scenarios and require clear answers on verification steps, residual risk, and incident communications.

  • Companies Act director duties in practice: effective oversight requires the Board to challenge whether payment controls and delegations work under stress, not just on paper.

What recent cases teach in plain language

  • Arup video call fraud, early 2024: confirms that group deepfakes can defeat social proof inside a meeting. Every participant appeared genuine. Lesson: treat video presence as untrusted unless verified out-of-band for approvals.

  • Celebrity deepfake ads targeting UK audiences, 2025: national coverage shows criminals using AI-generated video to drive people to fraudulent investment schemes, including repeated misuse of Martin Lewis. Lesson: the misappropriation of reputational capital at scale normalises the use of deepfakes, desensitising staff and weakening internal scepticism.

  • Public awareness gap, Ofcom 2024: nearly half of respondents report exposure to deepfakes in a six-month window, with concern rising and confidence to detect remaining low. Lesson: assume many staff cannot reliably spot fakes without process-based checks.

  • UK strategic picture, NCSC 2025: incident volumes and hostile capability are up, and AI is compounding attacker reach. Lesson: Boards should expect more frequent social engineering that blends text, voice, and video.

Actions the Board can take in the next 30 days

Put it on the agenda

  • Approve a two-step verification policy for executive approvals: any out-of-band instruction that moves money, alters banking, changes supplier standing, or releases sensitive data requires a second channel confirmation using pre-registered numbers, plus a short challenge phrase. Chair included.

  • Direct a rehearsal cycle: schedule a live simulation that hits finance authorisers, PAs, and named executives with safe voice clones, AI-written emails, and a mock Teams call. Require measurable outcomes and a Board-level heat map; a simple aggregation sketch follows this list.

  • Tighten the delegation and time-pressure rules: if a request cites secrecy, travel, or crisis, the bar for verification rises. Write this into the policy.

  • Mandate change control for supplier banking details: zero acceptance of email-only or document-only changes. Callback to a recorded number, cross-checked in the supplier master, plus dual approval.

  • Require incident communications lines for executives: pre-approved wording for stopping suspicious calls or ending a meeting politely, for example, “This instruction triggers our dual-verification rule; I will call you back on your registered number.”
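
On the heat map point above, the measurement does not need specialist tooling. The sketch below is one hypothetical way to turn a simulation log into a Board-level view; the record fields, team names, and scenarios are assumptions for illustration.

```python
from collections import defaultdict

# Hypothetical simulation log: one record per person targeted in the exercise.
results = [
    {"team": "Finance",  "scenario": "voice clone",      "verified_out_of_band": True},
    {"team": "Finance",  "scenario": "AI-written email", "verified_out_of_band": False},
    {"team": "Exec PAs", "scenario": "mock Teams call",  "verified_out_of_band": True},
]

def heat_map(records):
    """Return {(team, scenario): failure rate} for a Board-level view."""
    totals, failures = defaultdict(int), defaultdict(int)
    for record in records:
        key = (record["team"], record["scenario"])
        totals[key] += 1
        if not record["verified_out_of_band"]:
            failures[key] += 1
    return {key: failures[key] / totals[key] for key in totals}

for (team, scenario), rate in sorted(heat_map(results).items()):
    print(f"{team:10} {scenario:18} {rate:.0%} failed to verify")
```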

Insert clear lines into policy

  • No approvals over a single channel, including video.

  • No payment based on screenshots or forwarded chains.

  • No confidential deal can bypass verification rules.

  • Any breach of these rules is treated as an incident, not a favour.

Coach the culture

  • Make friction socially safe: leaders should praise staff who slow down payments to verify.

  • Normalise the callback: treat it professionally, not suspiciously.

  • Rotate challenge phrases: quarterly rotation, logged by Risk, as sketched below.
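
A rotation only counts if it is evidenced. The fragment below is a minimal, hypothetical way to generate a fresh phrase and produce the entry Risk would keep; the wordlist and field names are illustrative assumptions, and in practice only the fact of rotation, never the phrase itself, would be written down.

```python
import secrets
from datetime import date

# Illustrative wordlist only; real phrases would be agreed verbally by the pair.
WORDS = ["amber", "harbour", "granite", "willow", "copper", "lantern"]

def new_phrase() -> str:
    """Generate a short two-word challenge phrase for an approver pair."""
    return "-".join(secrets.choice(WORDS) for _ in range(2))

def rotation_log_entry(pair: tuple) -> dict:
    """Record that a pair rotated its phrase this quarter (the phrase is not logged)."""
    today = date.today()
    quarter = (today.month - 1) // 3 + 1
    return {"pair": pair, "quarter": f"{today.year}-Q{quarter}", "rotated_on": today.isoformat()}

print(new_phrase())                               # shared verbally, never emailed
print(rotation_log_entry(("cfo", "controller")))  # kept by Risk as evidence
```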

How to spot common deepfake and voice clone tells

Visual cues in video calls

  • Slight blur around hair or glasses that persists when the subject moves.

  • Lighting that does not change naturally when the person shifts position.

  • Lip movements that remain on a fixed rhythm, especially when answers are short and repeated.

  • Participants who refuse to switch cameras, share screens, or repeat a unique phrase.

Audio cues in calls and voicenotes

  • Compressed background with no room tone at all, even on speakerphone.

  • Answers that arrive at identical intervals after prompts, which hints at scripted playback rather than live conversation.

  • Unusual insistence on continuing the call when a callback is proposed.

  • Over-precise repetition of names, amounts, or dates in a way that feels scripted.

Behavioural cues across channels

  • A sudden switch to WhatsApp or a personal email for a corporate matter.

  • A request to keep a transaction secret from normal approvers.

  • Pre-populated documents that appear perfect yet drive urgency to sign or pay.

If any of these cues appear, directors should immediately use the callback rule. Ofcom’s analysis and the NCSC’s integrity guidance both support moving away from surface-level media checks toward process-level verification.

Cybersecurity Month, each October, creates a natural moment for a focused exercise. This year’s commentary across professional and legal channels emphasises a simple behavioural upgrade: do not believe everything you see or hear without an independent check. Use the month to test the executive workflows that move money and reputation.

Speak to a Conosco expert about deepfake and voice-cloning risks and get a pragmatic plan the Board can act on. The session focuses on your high-value workflows, maps likely attack paths, and outlines verification steps that directors and authorisers can use immediately. Guidance is aligned to ISO 27001 control families and the NCSC Board Toolkit, with clear ownership and measurement. No tooling pitch, only advisory clarity and evidence. Secure your executive approval workflows before they are tested in the wild.