Getting AI Wrong: The Real Risk Isn’t Technology

November 05, 2025 · 5 min read

We’re past the point of asking whether artificial intelligence will transform professional services. It already has. The question now is whether your organisation will shape that transformation intentionally, or whether it will shape you.

This isn’t hyperbole. Redundancies are spreading across both the private and public sectors in Australia. Telstra has publicly stated its workforce will be smaller within five years. Westpac, Bank of Queensland, and Microsoft have each cut hundreds of positions this year, with AI explicitly cited as a contributing factor.

These local developments mirror global projections. Amazon recently unveiled plans to eliminate roughly 14,000 corporate jobs, while Anthropic CEO Dario Amodei warns that AI could displace half of entry-level white-collar roles within five years, pushing unemployment toward 10 to 20 percent. Australian organisations are already demonstrating this trajectory. Every knowledge-intensive industry faces the same inflection point: AI adoption is no longer a competitive advantage. It is table stakes.

Yet most organisations are getting AI wrong. They are selecting tools before defining strategy. They are automating broken processes. They are treating AI adoption as a technology project rather than an organisational metamorphosis.


The Duality Nobody Talks About

Artificial intelligence presents a fundamental duality. At the business level, it is both the greatest threat and opportunity in a century. At the personal level, it extends our capabilities while amputating others.

Many professionals report feeling both enhanced and diminished when using AI: more productive yet less sharp, faster yet less thoughtful. This tension is structural, not temporary. What impresses today becomes tomorrow’s baseline expectation.

As Sam Altman observes, “This is how the singularity goes: wonders become routine, and then table stakes.” Competitive advantage from early AI adoption erodes quickly. Continuous adaptation is no longer optional. It is the new operating model.


Seven Categories of AI Risk

Every knowledge-intensive organisation faces these emerging AI transformation risks:

  1. Disintermediation Risk: Direct-to-client AI services bypass traditional professional relationships.

  2. Erosion of Differentiation: Knowledge that once took years to acquire becomes instantly accessible. You can no longer differentiate purely on what you know.

  3. Compliance and Regulatory Exposure: If you cannot explain how AI reached a recommendation, you cannot demonstrate compliance.

  4. Talent Deskilling: If AI handles all complex work, how do emerging professionals develop the expertise to question AI outputs?

  5. Data Security and Privacy: Use of offshore or opaque AI models can breach privacy and data regulations.

  6. Employment Disruption: Industry projections suggest 10 to 25 percent of knowledge-work roles will be eliminated or fundamentally transformed within five years.

  7. Strategic Misalignment: The biggest risk of all is deploying technology carelessly, automating what you should be protecting, and overlooking what you should be amplifying.


Strategy First, Not Technology First

The fundamental mistake in AI adoption is selecting tools before defining strategy. You would not install fixtures and fittings in a house before laying the foundations. Yet organisations repeatedly make this error: choosing applications before clarifying objectives, automating processes before understanding outcomes.

A butterfly is not a caterpillar with wings. Transformation requires metamorphosis, not incremental enhancement.

This demands rethinking processes from first principles: start with outcomes, not automation; categorise what to replace versus redesign versus retain; and rethink role allocation between humans and AI.


Culture Matters More Than Technology

McKinsey research shows that while 78 percent of organisations have adopted AI, resistance to change remains the number-one barrier. Around 70 percent of digital transformation failures stem from culture, not technology.

Successful AI transformation requires three critical elements:

  • Trust-building: Make it safe to experiment. Question AI outputs openly. Establish transparent governance and create spaces for safe experimentation.

  • Strategic questioning: Managers must evolve from answer-providers to question-askers. When AI generates technically accurate responses instantly, the value shifts to asking better, more strategic questions.

  • Adaptive teams: Traditional training cannot keep pace with AI advancement. MIT research shows 55 percent of professional skills become outdated within months. The solution is continuous, cohort-based, collaborative learning.


The Irreplaceable Human Element

Despite AI’s growing sophistication, several dimensions of professional work remain irreducibly human:

  • Trust earned over time: AI simulates empathy but does not carry accountability. Research from Vanguard shows that with human advisers, 80 percent of clients report peace of mind; with robo-advice, only 71 percent.

  • Emotional intelligence: Great professionals know when not to speak, when to challenge, when to wait. AI can identify emotional patterns but cannot grasp the weight of decisions that change lives.

  • Ethical judgment: Real professional dilemmas involve moral trade-offs, not computational optimisation. They demand navigation of competing values and cultural context.

  • Knowing when to stop: A recent medical study (SDBench) found AI achieved 80 to 85 percent diagnostic accuracy versus doctors’ 20 percent. It was reported as evidence of AI superiority, but ignored critical factors: unnecessary testing, patient anxiety, and the human judgment of when not to test. Accuracy is not care. Efficiency is not wisdom.


Three Imperatives for Leaders

  1. Acknowledge the duality: AI extends and amputates simultaneously. Stop pretending it is only upside. Design consciously for the boundary between augmentation and erosion.

  2. Invest in culture before technology: Build trust, develop strategic questioners, and create adaptive teams. No amount of AI sophistication compensates for cultural resistance.

  3. Maintain clarity on the irreplaceable: If your AI adoption makes you more human, freeing you for deeper work, you are succeeding. If it makes you a coordinator of intelligence rather than a generator of wisdom, you have lost the plot.


The Ultimate Question

The question is not whether AI will transform professional services. It is whether you will shape that transformation intentionally, or let it shape you.

Organisations that approach AI transformation as metamorphosis, not incremental improvement, that build culture before deploying tools, and that maintain clarity on what remains fundamentally human, will be the ones still standing in five years.

The rest will be case studies in what not to do.

Mark Papendieck

Mark Papendieck is a senior financial services executive with over 20 years of experience leading innovation across wealth management, advisory, and fintech. As Chief Commercial Officer at DASH Technology Group, he drives digital transformation and growth for advice firms. Mark writes on the future of advice, AI adoption, and how technology can enhance the client experience.
