Human First AI: The End of Fake - AI Can Now Spot Your AI

TLDR:

LinkedIn deployed 360Brew in Q4 2025—a quality control system that detects lazy AI-generated content and cuts its visibility by up to 45%.

LinkedIn isn't anti-AI (Microsoft owns the platform). They're penalising a specific pattern: using AI to replace thinking rather than enhance it. The "Replacement" approach (prompt AI, post the output) introduces synthetic noise. The "Enhancement" approach (using AI to research more deeply, then applying human judgment) generates high-value insights.

For law firms: Shadow AI (staff using ChatGPT for client work) equals professional indemnity exposure. If a social media algorithm can spot fake AI patterns, so can the Solicitors Regulation Authority (SRA), opposing counsel and prospective clients. Fake is now provably dangerous. Quality control is coming to legal practice; 2026 PII renewals will demand answers. The SRA doesn't ban AI—it bans negligent reliance without oversight.

For consultants: Lazy AI use in communications, proposals, and client outputs can lead to a reputation collapse. When AI can draft "good enough" for £19/month, generic expertise is worthless. Deep Research (using AI to think better, not type faster) is your competitive moat.

The two paths forward: Lazy AI (Generate) = Templates, volume, liability. Deep Research (Think) = Forensics, insight, premium positioning.

The Quality Control Update LinkedIn really needed.

Six months ago, I wrote that LinkedIn wasn't broken—it was drowning in AI-generated content. I predicted platforms would deploy quality control. That authentic expertise would separate from algorithmic noise.

In October 2025, LinkedIn proved me right with a quality control update called 360Brew.

It's a 150-billion-parameter AI system that replaced thousands of fragmented algorithms. It reads your posts and profile together, measuring consistency, expertise and authenticity. And here's the part that changes everything: it detects and penalises lazy AI content. Visibility drops up to 45% when the system spots pattern-based, template-driven writing.

LinkedIn isn't anti-AI—Microsoft owns the platform and is deeply invested in AI, but it's always been anti-automation. What they're penalising is a specific pattern of use: Replacement rather than Enhancement. Using AI to replace your thinking process (prompting AI, then posting the output) creates synthetic noise. Using AI to enhance your thinking process (researching deeper, then applying judgment) creates high-value insight.

This isn't just an algorithm change—it's proof that fake is now dangerous.

For law firm leaders, AI-generated client advice, drafts and research briefs create professional indemnity risk. For consultants, AI-generated proposals, thought leadership, and pitches can trigger a reputation collapse. The distinction is no longer "AI versus human"—it's "Lazy AI versus Deep Research".

Path 1: Use AI to generate (templates, volume, speed) = Shadow AI liability. Path 2: Use AI to think (forensics, analysis, depth) = Deep Research advantage.

LinkedIn just put up a giant warning sign: fake doesn't scale anymore. Whether you're protecting your Practising Certificate or your consulting fees, the lesson is identical.

This article explains what just happened and what it means for high-stakes professional work in 2026.

The 2025 Chaos: What We All Lived Through

I've seen this story before. In February 2011, Google deployed an algorithm update called Panda. Overnight, hundreds of content mills collapsed. Websites that had built entire business models on keyword-stuffed, low-quality articles lost 50-90% of their traffic.

Google had deployed AI (yes, it existed well before ChatGPT came along in 2022) to detect patterns: shallow research, recycled structures, and content optimised for algorithms rather than humans. The businesses that survived were those creating genuine, expert content. The businesses that vanished were those gaming the system.

History just repeated itself on LinkedIn.

Throughout 2025, LinkedIn's algorithm felt chaotic. Posts that worked one week died the next. Reach became unpredictable. The data proved everyone's frustration was real. A Q3 2025 analysis of 318,842 posts found organic reach down 65% from its historical peak. Median impressions dropped 18% year-on-year. The average creator's growth was 20% slower than in Q2.

Company pages were hit hardest. In 2021, company content represented 7% of the average LinkedIn feed. By 2025, that figure had collapsed to 1-2%. Posts from company pages were reaching only 1.6% of followers—a 15% drop from late 2023.

Across the platform, strategists reported that "99% of creators I know" said LinkedIn felt unstable. One frequently cited observation was that the platform had become "faster-changing than I've ever seen," with creators struggling to keep up.

This wasn't paranoia. It was a fragmentation crisis.

Before 360Brew, LinkedIn relied on what engineers described as "thousands of specialised ranking models" with complex feature pipelines. Each algorithm operated independently: one ranked posts, another recommended jobs, and another suggested connections. The system couldn't assess content quality and relevance simultaneously. As AI-generated content flooded the platform, these fragmented systems couldn't cope.

The underlying problem I diagnosed in my July article was this: LinkedIn's algorithm wasn't broken—it was desperately trying to cope with an AI content flood. By 2025, estimates suggested over 50% of LinkedIn posts were AI-assisted. Platforms were inundated with generic, shallow content that users scrolled past without genuine engagement.

Steven Bartlett captured the crisis perfectly: "My feed actually feels lonelier because I don't feel like I'm talking to humans anymore."

That was the problem LinkedIn set out to solve.

LinkedIn's Solution: The 360Brew Quality Control System

360Brew is LinkedIn's answer to the quality control crisis. It's a decoder-only foundation model—similar in architecture to ChatGPT—built specifically for ranking and recommendations. The system was developed by LinkedIn's AI team throughout early 2025, disclosed in research papers by engineers including Vignesh Kothapalli and Wenzhe Shi, and rolled into production during October-November 2025.

Unlike the previous fragmented system, 360Brew operates as a unified intelligence. It handles more than 30 prediction tasks through a single text interface, without hand-engineered features. Think of it as LinkedIn replacing a committee of specialists with a single expert who understands the context.

Here's how it works differently from what came before.

360Brew reads your profile and your content together as natural language. It assesses topical consistency, clarity of expertise and authenticity of voice. It optimises for professional relevance, not just engagement. The system focuses on what LinkedIn engineers call "meaningful interactions"—thoughtful comments, genuine discussions, content that keeps people reading rather than scrolling.

The rollout was quiet. There was no official press release, no announcement of deployment dates. LinkedIn specialists began describing 360Brew as "launched" and "rolling out" in late October 2025, but the company left ambiguous whether the system was at 40%, 70% or 100% deployment across the feed. What's clear is that 360Brew is now the production ranking engine across multiple LinkedIn surfaces: feeds, job recommendations, content suggestions and search.

The behavioural changes are measurable and significant.

First, there's what LinkedIn engineers describe as the "lost in distance effect". The opening lines of your post carry disproportionate weight in how the algorithm assesses relevance. If your opening is weak or unclear, the post won't reach its full potential distribution, even if the rest is strong. This is a semantic reading system, not a keyword matcher. It's evaluating whether you front-load value, clarity and expertise.

Second, hashtags no longer function as a primary distribution lever. The algorithm reads the semantic meaning across your post and profile rather than matching specific tags. Multiple LinkedIn strategists now describe hashtags as a "secondary relevance signal"—useful but no longer a primary driver of reach. Topic consistency across your posting history matters far more than tactical hashtag use.

Third, saves signal long-term value. When users save your post, 360Brew interprets it as content worth revisiting. Saved posts remain visible in feeds longer than quick, reaction-driven content. The 2024-2025 algorithm reports identified saves as one of the strongest positive engagement signals, associated with evergreen posts that continue to resurface over time.

Fourth, credibility tracks behaviour patterns. The system doesn't evaluate posts in isolation. It examines the overall pattern of what you share and how consistently it aligns with your stated expertise. Your posting history now shapes how far future posts travel. Long-term consistency—demonstrating you are who your profile claims you are—builds algorithmic trust.

The trade-off everyone is experiencing is this: lower raw reach but more consistent, targeted visibility. Fewer viral spikes, more predictable performance within a defined audience. Some analysts dispute claims of a catastrophic 65% collapse, citing data indicating an 11-20% average decline rather than a platform-wide implosion. The more accurate framing is redistribution: 360Brew is concentrating reach among a smaller share of high-performing creators whilst down-ranking generic, inconsistent content.

For professional services—law firms, wealth managers, consultancies—this matters because credibility-dependent businesses need trust, not broadcast volume. Generic "thought leadership" platitudes are now actively suppressed. Niche-specific expert content continues to perform.

The Paradox: AI Detecting AI

Here's the part that makes 360Brew significant beyond LinkedIn: LinkedIn deployed sophisticated AI trained on member content to understand professional communication, and that same AI now detects and down-ranks AI-generated patterns.

The irony is deliberate. LinkedIn's quality control system is itself a generative-style foundation model. It learns from how experts explain concepts, structure posts, and engage across the platform. It reads posts as natural language, assessing clarity, originality and voice distinctiveness. Because it's a language model, it's exceptionally good at identifying repetitive, pattern-based text.

This is what 360Brew is detecting:

Templated phrasing and recycled structures. Posts that follow predictable AI formulas: "In 2026, the biggest challenge facing [industry] is..." or "Here are 5 ways to [generic outcome]..." The system measures lexical diversity. Content that feels copy-pasted from a template registers as low originality.

Generic hooks without substance. Opening lines designed to grab attention but containing no actual insight. "You're doing [common activity] wrong" or "Everything you know about [topic] is outdated." If the opening doesn't deliver on its promise, the post is flagged as engagement bait.

Engagement pod behaviour. Because 360Brew analyses comment patterns, it can detect when the same small circle of accounts repeatedly uses similar phrasing. If five people comment "Great insights!" using near-identical sentence structures within minutes of posting, the system recognises coordination rather than genuine response.

Obvious AI-generated carousels. Slide decks that follow rigid, predictable formats with no distinctive visual or narrative style. Multiple practitioners report that "templated" carousel posts saw sudden visibility drops after the 360Brew update.
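The signals above can be made concrete with a toy sketch. This is purely illustrative: 360Brew's actual features and thresholds are proprietary, so the type-token ratio, the string-similarity test, and every cutoff below are invented stand-ins for the kind of lexical-diversity and recycled-comment checks the article describes.

```python
from difflib import SequenceMatcher

def type_token_ratio(text: str) -> float:
    """Crude lexical-diversity proxy: unique words divided by total words.
    Templated, repetitive text scores low; distinctive text scores high."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def looks_coordinated(comments: list[str], threshold: float = 0.8) -> bool:
    """Flag a comment set when most pairs are near-identical strings,
    the way 'Great insights!' pod replies tend to be."""
    pairs = [(a, b) for i, a in enumerate(comments) for b in comments[i + 1:]]
    if not pairs:
        return False
    similar = sum(
        SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold
        for a, b in pairs
    )
    return similar / len(pairs) > 0.5

templated = "Great insights! Great post! Great content! Great share!"
original = "The November PII terms update quietly changes how insurers treat AI-assisted drafting."
print(type_token_ratio(templated) < type_token_ratio(original))  # repetition scores lower

pod = ["Great insights!", "Great insights!!", "Great insight!"]
print(looks_coordinated(pod))  # near-identical replies within a small circle
```

A production system would use far richer signals (embeddings, posting cadence, account graphs), but even this toy version shows why five near-identical "Great insights!" comments posted within minutes are trivially detectable.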

The evidence for this detection capability comes from multiple sources. LinkedIn strategist Melonie Dodaro emphasises that posts that "follow predictable AI structures or phrasing" and feel templated rather than experience-driven are now "easy for the system to spot," leading to reduced visibility. AuthoredUp's technical analysis explicitly states that 360Brew can "accurately detect engagement pods and recycled comment patterns" using lexical diversity metrics.

One widely shared creator warning from October 2025 notes that the algorithm "can now detect repetitive AI phrasing and pattern-based writing," claiming visibility can drop "by up to 45%" if such patterns are detected. Creator experiments described in strategy posts and Substack essays report that near-identical, ChatGPT-style generic posts underperform compared with more personal rewrites, and that rigid listicles and recycled hooks now see measurably worse results than rougher but more distinctive content.

LinkedIn's official position on AI-generated content is carefully worded. The platform's "Best practices for content created with the help of AI" page acknowledges AI-assisted content and recommends disclosure where appropriate, but focuses on ensuring originality, accuracy and adherence to professional norms rather than banning AI outright. Terms and data-use updates effective 3 November 2025 state that member content and activity will be used to train "content-generating AI models" by default.

So LinkedIn is training AI on your content whilst best-practice guides and strategist commentary warn that over-reliance on AI to generate that very content leads to lower visibility.

The lesson is not "don't use AI." It's "don't use AI lazily."

360Brew isn't doing literal AI-detector classification like tools that flag ChatGPT output. It's evaluating patterns: does this read like a human with specific expertise sharing lived experience, or does it read like a content mill optimised for volume?

The distinction matters enormously for law firms and consultants because if a social media algorithm can spot these patterns, you can be certain more consequential systems can as well.

The Critical Distinction: Enhancement vs. Replacement

Before we examine what 360Brew means for law firms and consultants specifically, we need to address a crucial point: LinkedIn is not anti-AI. Microsoft owns LinkedIn. They're not penalising AI use itself—they're penalising a specific pattern of AI use.

The distinction is between Enhancement and Replacement.

The Replacement Approach (Penalised by 360Brew)

This is using AI to replace the thinking process entirely.

The pattern looks like this: Open ChatGPT. Prompt: "Write me a LinkedIn post about leadership challenges in 2026." Copy the output. Post it. Perhaps you change a word or two, add your name, and maybe adjust the formatting. But fundamentally, you've replaced human judgment with AI generation.

The result is what 360Brew identifies as synthetic noise. The post reads professionally. It's grammatically correct. It might even be insightful in a generic sense. But it creates no dwell time because readers recognise the exact framing. The opening hook is predictable. The structure is templated. The conclusion is forgettable.

When hundreds of people do this simultaneously—and they are—feeds fill with content that looks professional but feels hollow. Users scroll past without engaging because nothing distinguishes one post from another. 360Brew measures this through engagement quality and time-on-post metrics, and it down-ranks accordingly.

This approach creates three problems:

First, low algorithmic performance. Visibility drops up to 45% because the system detects pattern-based writing and engagement metrics signal that users aren't finding value.

Second, reputational damage. When your content reads like everyone else's AI-generated content, you're signalling "I don't have distinctive expertise"—the opposite of what professional credibility requires.

Third, for regulated professions, liability exposure. If you're using AI to replace thinking in your marketing, the question becomes: are you using it to replace thinking in your client work? The pattern is identical, and the consequences are significantly more serious.

The Enhancement Approach (Rewarded by 360Brew)

This is using AI to deepen the thinking process whilst maintaining human judgment at the centre.

The pattern looks like this: You identify a topic you have genuine expertise in. You use AI to research counter-arguments, gather data you might have missed, identify patterns across multiple sources, or challenge your own assumptions. You use AI to structure complex information or test different ways of explaining a concept. Then you write the final content in your own voice, incorporating your lived experience, your professional judgment, and insights that AI can't generate because they're grounded in context it lacks.

The result is high-value insight that creates dwell time. Readers save the post because it taught them a specific, actionable skill. The opening is distinctive because it's rooted in your experience. The structure serves the insight rather than following a template. The conclusion prompts genuine discussion because you've said something worth responding to.

When you do this consistently, 360Brew recognises expertise patterns: topical consistency, depth of insight, and engagement quality. The algorithm amplifies your content to the audience most likely to find it relevant.

This approach creates three advantages:

First, improved algorithmic performance. Saves and thoughtful comments signal long-term value. Your content stays visible longer and reaches more relevant prospects.

Second, reputation enhancement. When your content demonstrates forensic understanding of your niche, you're signalling genuine expertise. Prospects recognise the difference between AI-assisted insight and AI-generated noise.

Third, for regulated professions, defensible workflows. If you're using AI to enhance thinking in your marketing, you're demonstrating the same disciplined approach you apply to client work. The pattern builds trust rather than undermining it.

The Strategic Lesson Across Contexts

For law firms, this mirrors the SRA's view of AI adoption. The regulator doesn't ban the use of AI in legal practice. What it prohibits is negligent reliance on AI outputs without human oversight, verification and professional judgment. A solicitor who uses AI to research case law but then applies their expertise to assess relevance and draft advice is acting competently. A solicitor who prompts an AI system to draft client advice and then sends it without verification is creating a negligence risk.

360Brew applies the same quality-control logic to content: AI-assisted expertise is valued; AI-replaced expertise is penalised.

For consultants, this validates the Deep Research methodology. When you use AI to analyse a prospect's regulatory environment, competitive context, or visible challenges—and then synthesise that research into outreach that demonstrates forensic understanding—you're using Enhancement. The AI handles information gathering. You handle judgment about what matters and what insight the prospect isn't getting elsewhere.

When you prompt AI to "write a proposal for consulting services" and send the output, you're using Replacement. The prospect recognises it as generic because they see the same output from competitors using identical prompts.

The Objective Measure: Dwell Time

LinkedIn doesn't have to guess which approach you're using. The data reveals it. Content created through Enhancement generates longer dwell time, more saves, and more substantive comments. Content created through Replacement results in rapid scrolling, generic reactions, and low engagement.

360Brew optimises for dwell time and professional value because those metrics correlate with user satisfaction. The system isn't making moral judgments about AI use—it's measuring outcomes.

This is why "quality control" is the accurate framing rather than "AI detection." LinkedIn deployed 360Brew to surface valuable professional content and suppress noise, regardless of how that content was created. The fact that lazy AI use creates detectable noise patterns is a consequence of the behaviour, not the system's target.

The strategic takeaway is this: AI is a thinking tool, not a typing tool. Use it to conduct deeper research, analyse more comprehensively, and challenge your assumptions more rigorously. Then apply your professional judgment to determine what matters and how to communicate it.

That distinction—Enhancement versus Replacement—is what separates professionals who will thrive with AI from those who will be displaced by it.

What This Means for Law Firms: Quality Control Is Coming to Legal Practice

If LinkedIn's 360Brew can detect lazy AI patterns in professional content, the SRA can detect them in client files.

This is not speculation. It's inevitable, given what we now know quality control systems can achieve. LinkedIn deployed a 150-billion-parameter model to assess the quality of professional communication. Regulatory technology is advancing along similar lines. When PII underwriters and regulatory bodies start deploying comparable systems to assess risk, they will identify the same patterns 360Brew identifies: templated language, low lexical diversity, recycled structures.

The immediate risk for law firms is what I call Shadow AI: fee earners, trainees, and support staff using publicly available AI tools for client work without the firm's knowledge or governance framework.

Are fee earners using ChatGPT to draft client advice without your awareness? Are trainees using AI to structure witness statements or disclosure summaries? Are support staff uploading privileged documents to public AI platforms to summarise them faster? Every one of these actions creates professional indemnity exposure because the AI's output is unverified, the client hasn't consented, and the firm has no audit trail.

The 360Brew update proves that detection is not only possible but scalable. If a social media platform can analyse hundreds of millions of posts to identify pattern-based content, a regulatory review can analyse your client files to identify undisclosed AI use.

2026 PII renewals will be the inflexion point. Insurers are already asking AI-specific questions: Do you have an AI usage policy? Do you have governance frameworks? Do you have audit trails for AI-assisted work? Firms that answer "no" or "we're not sure" will face higher premiums or coverage exclusions. Firms with unidentified Shadow AI exposure will be submitting renewal applications based on inaccurate information.

The window to audit and remediate is right now, before renewal season.

This is why my work with law firm leaders focuses on the "Safe Start" audit. I don't teach firms how to use AI; I stop them from getting sued by using it badly. The 2026 AI Safe Start Audit identifies Shadow AI risks on staff phones and laptops before the SRA does. We map where undisclosed AI use is happening, assess the professional indemnity exposure, and build governance frameworks that make AI adoption SRA-compliant.

If you're a managing partner losing sleep over your Practising Certificate or your firm's PII renewal, this is the strategic priority. Quality control is here. The question is whether you deploy it internally before it's deployed against you.

What This Means for Consultants: Generic Expertise Is Now Worthless

If LinkedIn's 360Brew can detect templated, AI-generated thought leadership, your prospective clients can, too.

The uncomfortable truth consultants face in 2026 is this: AI can now draft "good enough" proposals, frameworks and content for £19 per month. If you sell "knowledge"—insights, strategies, best practices that can be articulated in documents—you're competing with tools your clients already have access to.

The 360Brew update is a warning shot. It demonstrates that pattern-based, template-driven content is not just detectable but actively penalised. On LinkedIn, that penalty is a visibility drop of up to 45%. In client relationships, the penalty is erosion of credibility and fee pressure.

When a prospect receives your proposal, and it reads as if it came from ChatGPT—generic structure, recycled insights, predictable phrasing—they recognise it. Not because they're running AI detection software, but because they've seen that same output from the AI tools they're using themselves. Your expertise looks commoditised because it is commoditised. You've used Replacement (AI replacing your thinking) rather than Enhancement (AI deepening your thinking).

This is the crisis facing consultants, coaches, and solo advisors who built their practices on "thought leadership": content volume doesn't translate into business results. It never did, but now it's provably counterproductive. The LinkedIn algorithm is measuring what prospects have been feeling: Replacement creates noise; Enhancement creates value.

The escape route is Deep Research, which is fundamentally an Enhancement methodology.

Deep Research is the methodology I developed to win high-ticket law firm clients without content volume, without spam, without the "post three times a week" hamster wheel. It's forensic outreach: 2-5 strategically researched messages per week that demonstrate distinctive insight rather than generic capability.

The method works because it uses AI to think, not to generate. I use Claude and Perplexity to analyse a prospect's specific situation—their regulatory context, market positioning, and visible challenges—and synthesise that research into messages that demonstrate I understand their world better than they expect. The AI handles data gathering and pattern recognition. I handle judgment: what matters, what's at stake, what insight they're not getting elsewhere.

This approach directly mirrors what 360Brew rewards: depth over frequency, specificity over templates, expertise you can't fake. When my outreach demonstrates forensic-level understanding of a law firm's SRA compliance gaps or PII exposure, the recipient recognises it as genuine expertise rather than marketing automation.

The same distinction applies to your LinkedIn presence. Posts that teach something specific—"Here's what the November 2025 PII terms update means for AI-assisted drafting"—stay visible because they're reference-worthy. Posts that offer generic motivation—"5 ways to future-proof your business"—get suppressed because they're indistinguishable from AI slop.

360Brew is sorting consultants into two categories: those who use AI to think deeper, and those who use AI to produce more. The first category will command premium fees in 2026. The second category will compete with £19/month software.

The Two Paths Forward: Replacement vs Enhancement

Every professional services firm now faces a choice about how they use AI. The 360Brew update makes the consequences of that choice measurable and immediate.

Path 1: Replacement (Lazy AI)

This is the path most firms defaulted to in 2024-2025 because it felt efficient. Use AI to replace the thinking process: prompt it to produce drafts, posts, proposals, and client communications, then post or send them with minimal editing. The logic was simple: if AI can generate "good enough" output in seconds, why spend hours crafting bespoke content?

The problem is that "Replacement" creates three forms of risk.

For law firms, Replacement means Shadow AI liability. Fee earners using ChatGPT to draft client advice, trainees using AI for legal research without verification, and support staff using public AI tools to summarise privileged documents. Every instance creates professional indemnity exposure because the output is unverified, the client hasn't consented, and the firm has no governance framework. When something goes wrong—and statistically, it will—the firm can't demonstrate it took reasonable steps to prevent the harm.

For consultants, Replacement means reputation collapse. When your proposals, thought leadership, and client communications read as if they came from a template, prospects recognise it. They're using the same £19/month tools you're using. Your expertise looks generic because it is generic. The immediate consequence is fee pressure: if AI can produce your insights, why pay your rates?

For both, Replacement triggers 360Brew penalties. Templated posts, recycled structures, pattern-based writing—visibility drops up to 45%. Your LinkedIn presence, which exists to build credibility, actively undermines credibility by signalling you're producing volume rather than insight.

Path 1 is a race to the bottom. It optimises for speed and volume whilst eroding trust, liability protection and professional differentiation.

Path 2: Enhancement (Deep Research)

This is the path that treats AI as an analytical partner rather than a thinking replacement. Use AI to gather information, identify patterns, synthesise research, challenge your assumptions—then apply human judgment to determine what matters, what's at stake, and what insight the client isn't getting elsewhere.

For law firms, Enhancement means AI workflows that are governed. Fee earners use AI for legal research, with verification protocols in place. Partners use AI to analyse patterns in case law but draft advice themselves. Support functions use AI for document review within closed, auditable systems. Every AI interaction has governance: consent, audit trails, and human oversight. This isn't slower than Shadow AI—it's strategically faster because it's defensible. You're using AI to enhance solicitor capacity, not replace solicitor judgment.

For consultants, Enhancement means distinctive positioning. Use AI to analyse a prospect's specific context—their regulatory environment, competitive positioning, and visible challenges—then synthesise that research into outreach that demonstrates a forensic understanding. The AI handles information gathering. You handle judgment. The result is expertise that can't be faked because it's grounded in depth rather than templates.

For both, Enhancement means 360Brew rewards. Posts that teach specific, useful insights stay visible longer because they're saved and referenced. Content that demonstrates genuine expertise rather than generic capability builds credibility. Your LinkedIn presence becomes an asset rather than a liability.

Path 2 is a race to the top. It uses AI to do better, deeper work rather than more, faster work. It optimises for trust, defensibility and premium positioning.

The choice seems obvious when framed this way. Yet most firms remain on Path 1 because they haven't recognised the shift. 360Brew is the warning: quality control is here, detection is possible, and Replacement is now measurably dangerous, whilst Enhancement is measurably rewarded.

Why January 2026 Is Your Window

360Brew rolled out in Q4 2025. Most firms are only now noticing the impact. New year strategic planning cycles create a natural inflexion point. Those who understand that quality control is here—not coming, but here—get first-mover advantage.

For law firms, timing is critical as 2026 PII renewals approach. Shadow AI use that happened in 2024-2025 hasn't been detected yet, but that doesn't mean it won't be. If LinkedIn's AI can spot patterns in public posts, compliance review technology can spot patterns in client files. The question PII underwriters will ask is: "Do you have governance frameworks for AI use?" Firms that answer "no" or "we're not sure" will face premium increases or coverage exclusions.

The window to audit and remediate is right now. Before renewal applications go in. Before the SRA deploys comparable detection systems to what LinkedIn just demonstrated is possible. The 360Brew update proves that pattern-based content is detectable at scale. Regulatory technology will follow.

For consultants, timing matters: 2026 is when fee pressure becomes undeniable. Clients have spent a year experimenting with AI tools. They've discovered that AI can draft "good enough" proposals, frameworks and reports for £19/month. The consultants who survive are those who demonstrate they're offering something AI can't replicate: judgment shaped by experience, forensic analysis of client-specific contexts, and insights that can't be templated.

Your 2025 LinkedIn performance doesn't define 2026 visibility. 360Brew evaluates current posting patterns, not historical metrics. If you had limited reach in 2025 because you were using templated AI content, you can reset using the Deep Research methodology I developed. The algorithm rewards niche expertise, authentic voice and forensic insight. Those posting generic "thought leadership" are now handicapped; distinctive experts are surfaced.

The psychological shift required is this: stop thinking "How do I use AI to do more, faster?" and start thinking "How do I use AI to do better, deeper?"

Stop asking: "Can AI write this for me?" Start asking: "Can AI help me think about this more rigorously?"

For law firms, that means AI for research and analysis within governed workflows (Enhancement) rather than AI for drafting and advice without oversight (Replacement). For consultants, that means AI for insight discovery and pattern recognition (Enhancement) rather than AI for proposal generation and content volume (Replacement).

This is not a crisis. It's a clarity moment. The noise is being filtered out. Genuine expertise is being surfaced. Those who treat AI as quality Enhancement—ensuring their work is deeper and more defensible—will dominate their niches in 2026.

The Quality Control Lesson

LinkedIn implemented quality controls because lazy AI-generated content was eroding user trust. The platform detected a problem: feeds full of templated, pattern-based professional content that looked credible but felt hollow.

Their solution was sophisticated AI that can spot lazy AI use patterns—not to ban the technology, but to enforce quality standards. Microsoft owns LinkedIn. They're not anti-AI. They're anti-noise.

The distinction they're enforcing is Enhancement versus Replacement:

  • Replacement: Using AI to replace thinking (prompt and post) = synthetic noise, low dwell time, 45% visibility penalty

  • Enhancement: Using AI to deepen thinking (research, analyse, then apply judgment) = high-value insight, saved content, algorithmic rewards

This is the lesson for high-stakes professional work: if a social media algorithm can detect the difference between AI-enhanced expertise and AI-replaced expertise, you can be certain the SRA can detect it in client files, opposing counsel can detect it in disclosure, prospective clients can detect it in proposals, and insurance underwriters can detect it in risk assessments.

Fake doesn't scale. It never did. Now it's provably detectable.

The two paths in 2026 are these:

Path 1: Lazy AI (Replacement) — Templates, volume, speed. ChatGPT writes it, you post it. Law firms face Shadow AI liability exposure because the same pattern (AI without oversight) creates negligence risk in client work. Consultants face commoditised fees because prospects recognise generic AI output. LinkedIn penalises this by up to 45% because it creates no user value.

Path 2: Deep Research (Enhancement) — Forensics, analysis, synthesis. AI handles research; you apply judgment. Law firms achieve SRA-compliant capacity by enhancing solicitors' expertise rather than replacing it. Consultants achieve premium positioning because AI-assisted forensic insight can't be commoditised. LinkedIn rewards this with targeted, lasting visibility because it creates genuine value.

The 360Brew update is a warning shot. Whether you're protecting your Practising Certificate or your consulting reputation, the message is identical: quality control is here, and it's non-negotiable.

The businesses that treat AI as a thinking partner—not a typing replacement—will dominate 2026.

About Richard

I'm Richard — a Strategic AI Adviser based in Cheshire. I provide Strategic Counsel on AI Governance, Adoption and SRA-compliant capacity for Law Firm Leaders.

I've spent 35 years in business, including 25 years in marketing and nearly three years working intensively with generative AI (I'm a CPD-certified AI Trainer).

My "Human First AI" adoption framework focuses on the principle that artificial intelligence should amplify human strengths, not replace them. It's about people, trust, and clarity at the heart of innovation.

I develop these insights using Claude as my strategic thinking partner, applying the Human First AI Adoption Framework I've refined through work with dozens of professional services practices. The research, analysis, and strategic frameworks are mine; Claude helps me structure and articulate them clearly.

Part of the Human First AI Professional Services series

Previous

Human First AI: Why Renewing Your PI in 2026 Will Be Harder Than You Think

Next

Human First AI: AI Needs You More Than You Realise