Back in January, we sketched nine forces we said would shape cybersecurity through 2025. Seven months of field data are in. Some calls landed almost on the nose, others misfired, and a few are still warming up. Below is the expanded scorecard on how our predictions are stacking up.
Our scoring algorithm is detailed at the end of this post.
Score 8 out of 10
Generative AI has shifted from experiment to production in criminal circles. CrowdStrike’s 2025 Threat Hunting Report tracks adversaries using large language models to automate reconnaissance, draft spear-phish lures, and even write polymorphic loader code.
Netscout analysts warn that chatbot-style assistants such as WormGPT are now planning multi-vector distributed denial-of-service runs for non-specialist crews.
Why that score? We predicted rapid growth, and we got it, but the volume is still uneven across sectors.
What happens next? Expect off-the-shelf toolkits to become subscription services that bundle exploit research, social engineering scripts, and GPU time.
Score 7 out of 10
Gartner’s 2025 Hype Cycle places agentic AI security at the peak of inflated expectations, yet buyers are already trimming pilots that lack measurable return.
Forrester’s Q2 2025 Wave on security analytics shows ten vendors claiming AI differentiation, but only four demonstrate verifiable threat detection uplifts.
Why that score? The gold rush is real, though consolidation is underway.
What happens next? Boards will value AI that integrates with existing telemetry over brand-new consoles. Look for mergers rather than green-field launches in H2.
Score 9 out of 10
The Cyber Governance Code of Practice, published in April, pushes legal accountability onto directors for the first time.
Industry lawyers now treat cyber drills the same way they treat fire drills: miss one and explain it in public.
Why that score? Mandatory guidance beats voluntary frameworks.
What happens next? Expect auditors to require evidence of tabletop exercises and live incident telemetry, not annual PDF reports.
Score 7 out of 10
Check Point’s Q2 2025 Brand Phishing Report shows Microsoft as the bait in twenty-five per cent of phishing attempts, with Google and Apple next in line.
Attackers now misuse link-wrapping features of secure email gateways to sneak through trusted domains and loot Microsoft 365 credentials.
Why that score? Headline volume matches the call, though smaller brands remain under-reported.
What happens next? Real-time domain spoofing monitors and automatic DMARC enforcement will replace monthly summary charts.
Score 8 out of 10
A global CIO Pulse survey finds ninety-six per cent of organisations favour the model and eighty-one per cent plan to deploy within twelve months.
Forrester’s July Wave on zero trust platforms highlights buyers demanding a unified policy rather than single-point products.
Why that score? Commitment is high, though completion is patchy.
What happens next? Attention shifts from network segmentation to identity context, with continuous posture scoring across cloud estates.
Score 9 out of 10
Zscaler reports a one-hundred-and-forty-six per cent year-on-year spike in blocked attempts, the sharpest rise in three years.
IBM’s latest Cost of a Data Breach report notes that one-sixth of breaches now involve generative AI tools that cut phishing prep from sixteen hours to five minutes.
Why that score? Volume and technique escalation both confirmed.
What happens next? Operators pivot from encryption to pure exfiltration, squeezing victims through leak sites and regulatory exposure.
Score 5 out of 10
Bitsight finds forty-three per cent of UK firms run continuous third-party monitoring, yet only twenty per cent class their posture as very mature. New NIS2 obligations complicate compliance and raise stress levels.
Why that score? Awareness rose, but action lags.
What happens next? Regulators are drafting minimum-viability controls that will hard-wire supplier attestations into every contract.
Score 8 out of 10
IBM pegs the global average breach at four point nine million US dollars, up ten per cent, with shadow AI incidents adding six hundred and seventy thousand dollars on top.
Why that score? Financial impact met the upper end of our range.
What happens next? With payment bans looming, legal fees and shareholder litigation will outpace ransom demands in cost models.
Score 6 out of 10
Marsh reports softer premiums and broader wording, yet S&P Global estimates only five to ten per cent of UK small and medium enterprises carry a standalone policy.
An ESET study puts the figure at eight per cent for firms with no external security support.
Why that score? Pricing is attractive, but adoption is not.
What happens next? Underwriters will link discounts to zero trust deployment and continuous control validation, raising the bar for cover.
AI governance bites
The EU AI Act introduces transparency and watermarking duties for general-purpose models on 2nd August 2025. Boards must budget for red-team simulations that probe model drift and copyright leakage.
Post-quantum readiness moves to the board agenda
The NCSC timetable expects firms to map cryptography by 2028 and complete migration by 2035. NIST has already accepted FIPS 140-3 modules that include post-quantum primitives, signalling market readiness.
The UK inches toward a ransomware payment ban
The Home Office consultation proposes outlawing ransom payments for the public sector and critical national infrastructure while forcing private firms to report intent. Even a partial ban will shift negotiation dynamics and elevate the legal risk of paying.
The first half of 2025 proved that speed now trumps size in cyber risk. Attackers iterate faster, regulators chase harder, and defenders who match the tempo turn disruption into a competitive edge. Re-check every plan monthly and treat your security budget as a living instrument. Predicting is enjoyable; adjusting in flight keeps you in business.
1. Anchor points
10 means the prediction happened materially as stated, within the expected time frame, and across most of the market segments we care about.
5 marks a partial hit: the trend is visible but patchy or slower than forecast.
1 means the call is clearly off track or has yet to surface.
Anchoring the endpoints before gathering evidence stops retrospective bias.
2. Four evidence pillars of equal weight
Pillar | Question asked | Typical sources
---|---|---
Magnitude | Has the shift reached critical mass? | Industry incident data, vendor telemetry
Velocity | Is the speed in line with the forecast window? | Quarter-on-quarter or year-on-year deltas
Breadth | How many sectors or geographies show impact? | Analyst surveys, regulatory filings
Salience | Is the change visible to executives and the press? | Mainstream media, government policy papers
Each pillar earns up to 2.5 points. Summed, they produce the 10-point scale.
3. Evidence sweep
For every prediction we pulled:
Two analyst or regulator documents (Forrester, Gartner, NCSC, EU AI Act drafts).
One incident or telemetry dataset (CrowdStrike, Netscout, Zscaler).
One mainstream or trade-press confirmation.
Sources were cross-checked for date alignment and sector relevance to UK mid-enterprise readers.
4. Scoring workflow
We read the evidence and logged quantitative markers: percentage growth, adoption rates, and financial impact.
We assigned each pillar a provisional 0–2.5 value.
We stress-tested the provisional score against contradictory data. Adjustments downward were made if the evidence was thin or localised.
We rounded to the nearest whole number for simplicity in the article.
Example: AI-powered attacks
Magnitude: 2.0 (attacks up sharply, confirmed by multiple vendors).
Velocity: 2.0 (growth exceeded year-on-year forecast).
Breadth: 1.5 (heaviest in finance and public sector, less in manufacturing).
Salience: 2.5 (regular headlines, parliamentary questions).
Total = 8.0, rounded to 8.
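The pillar arithmetic above is simple enough to express in a few lines. A minimal sketch in Python (the function name and dictionary shape are our illustration, not actual tooling):

```python
def score_prediction(pillars: dict[str, float]) -> int:
    """Sum four pillar values (each 0-2.5) into a 10-point score,
    rounded to the nearest whole number as in the article."""
    for name, value in pillars.items():
        if not 0.0 <= value <= 2.5:
            raise ValueError(f"pillar {name!r} out of range: {value}")
    return round(sum(pillars.values()))

# The AI-powered attacks example from above:
ai_attacks = {"magnitude": 2.0, "velocity": 2.0, "breadth": 1.5, "salience": 2.5}
print(score_prediction(ai_attacks))  # 8
```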
Why not use formal forecast metrics like Brier scores?
The original January calls were qualitative, not probabilistic, so a calibrated Brier or log loss would be artificial. A transparent pillar model is easier for non-statistical readers to audit and replicate.
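For readers curious about what is being set aside: a Brier score averages the squared gap between a stated probability and the 0/1 outcome, so it requires forecasts expressed as probabilities in the first place. A minimal sketch, with invented figures:

```python
def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between probabilistic forecasts and binary
    outcomes. 0.0 is perfect; always guessing 50% earns 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Had January's calls carried probabilities (these numbers are invented):
print(brier_score([0.9, 0.6, 0.3], [1, 1, 0]))  # ≈ 0.087
```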
Governance checks
Every score links back to at least three independent references.
No single vendor or media outlet can swing a pillar by more than 0.5.
Scores will be revisited at year-end to see how the pillar method holds up.
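The per-source cap in the checks above can be sketched the same way (again illustrative, not the actual tooling):

```python
def pillar_value(source_contributions: list[float],
                 cap: float = 0.5, pillar_max: float = 2.5) -> float:
    """Sum evidence toward one pillar, capping any single source at
    `cap` so no vendor or outlet dominates, and clamping the total
    to the 2.5-point pillar ceiling."""
    total = sum(min(c, cap) for c in source_contributions)
    return min(total, pillar_max)

# One over-enthusiastic source: its 1.2 is capped to 0.5 before summing.
print(pillar_value([1.2, 0.4, 0.3]))
```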
Accuracy scores are derived from publicly available data and Conosco’s analyst interpretation. Different weightings may yield different results.
Provider/company | Document or page title | Link
---|---|---
CrowdStrike | 2025 Threat Hunting Report |
Netscout (coverage in IT Pro) | “Think DDoS attacks are bad now? Wait until hackers start using AI assistants …” | (IT Pro)
Gartner | 2025 Hype Cycle for Artificial Intelligence | (Gartner)
Forrester | The Forrester Wave: Security Analytics Platforms, Q2 2025 |
UK Government (DSIT) | Cyber Governance Code of Practice (April 2025) | (GOV.UK)
Check Point Research | Phishing Trends Q2 2025 report |
CIO.com | “Why 81% of organisations plan to adopt Zero Trust by 2026” | (CIO)
Forrester | The Forrester Wave: Zero Trust Platforms, Q3 2025 |
Zscaler | 2025 Ransomware Report (146% spike) |
IBM Security | Cost of a Data Breach Report 2025 | (IBM)
Bitsight | 2025 State of Cyber Risk and Exposure (UK spotlight) | (IT Pro)
Marsh | “Five trends in UK cyber insurance in Q1 2025” | (Marsh)
S&P Global (reported by Reinsurance News) | “Cyber insurance premiums stabilise in 2025, but market penetration remains below 10% for SMEs” |
ESET (reported by TechRadar Pro) | “Only 8% of UK firms carry standalone cyber insurance” |
European Commission | EU AI Act: general-purpose AI Code of Practice (August 2025) | (IT Pro)
NCSC | Post-quantum cryptography migration roadmap (March 2025) | (NCSC)
NIST | “NIST releases first finalised post-quantum encryption standards” | (nist.gov)
UK Home Office | Ransomware legislative consultation (January–April 2025) | (GOV.UK)