Using AI for Your Unfair Dismissal Claim? Here’s What the Fair Work Commission Is Actually Seeing

If you’ve recently been dismissed and your first instinct was to ask ChatGPT whether you have a case — you’re not alone. Thousands of Australian workers are doing exactly the same thing. And it’s creating serious problems for everyone involved.

In February 2026, Fair Work Commission President Justice Adam Hatcher delivered a landmark presentation to the Victorian Bar Association, revealing that the Commission’s total workload has increased by over 70% in just three years. [1] The primary driver? Employees using AI tools to prepare and lodge unfair dismissal and general protections claims.

But here’s what most people don’t realise: AI isn’t just creating more claims. It’s creating worse claims — built on inflated expectations, fabricated case law, and a fundamental misunderstanding of how the Fair Work system actually works. Research from Stanford University found that general-purpose AI chatbots hallucinate on legal queries between 69% and 88% of the time [2] — and even purpose-built legal AI tools get it wrong up to one in three times. [3]

And it’s not only employees who are getting burned. Employers relying on AI to manage terminations, draft warning letters, and handle workplace investigations are making costly mistakes that often result in larger payouts than necessary.

What the Fair Work Commission Is Actually Seeing

The numbers tell a stark story. Until 2023, the Fair Work Commission dealt with roughly 30,000 matters per year. By 2024–25, that number had jumped to over 44,000. For the current financial year (2025–26), the Commission is projecting between 50,000 and 55,000 lodgments. [1]

To put that in perspective: unfair dismissal claims alone have grown 41% between 2022–23 and 2024–25. General protections dismissal claims under section 365 have surged 62% over the same period. Other general protections disputes are up a staggering 135%. [4]

Justice Hatcher was direct about the cause. The historical correlation between retrenchment rates and dismissal applications — which had held steady for decades — has broken down entirely. The timing of this break coincides precisely with the release of ChatGPT in November 2022 and the flood of AI tools that followed. [1]

Perhaps most tellingly, the Commission’s own analysis shows that approximately 85% of dismissed employees now contest their dismissal through the FWC — up from around 76% just one year earlier. [4] That’s nearly 9 in 10 dismissed workers filing a claim, compared to roughly 3 in 4 before AI tools became widely available.

There’s an important tactical dimension here too. Approximately two-thirds of general protections applications are made by individuals who don’t meet the minimum employment period for an unfair dismissal claim. [5] This suggests AI tools are helping employees reframe what would otherwise be unfair dismissal claims as general protections applications, which carry no minimum employment period — a shift that’s adding significant complexity and volume to the Commission’s workload.

Why AI-Prepared Claims Are Failing

The Fair Work Commission has started identifying a clear pattern with AI-generated claims, and the problems go well beyond formatting issues.

Fabricated Case Law and AI Hallucinations

AI tools regularly invent legal authorities. In Riley v Nuvei Australia Merchant Services Pty Ltd [2026], the Commission found that the applicant had used a “legally trained” AI tool to prepare his submissions. Some of the case law he cited simply didn’t exist — the Commission noted that certain legal principles and authorities in the submission appeared to be AI hallucinations with no actual legal basis. [6]

This isn’t an isolated incident. Globally, researchers have now documented 486 cases of AI hallucinations in court filings — 324 of them in US courts alone. Self-represented individuals account for 189 of those US cases, but 128 were attributed to licensed lawyers, and 2 to judges. [7] If trained legal professionals are being caught out, the risks for unrepresented employees are significantly higher.

The academic evidence confirms the scale of the problem. A 2024 Stanford University study testing more than 200,000 queries found that general-purpose chatbots like GPT-3.5, Llama 2, and PaLM 2 hallucinated on legal queries between 69% and 88% of the time. [2] A follow-up Stanford study in 2025 tested dedicated legal AI platforms and found hallucination rates of 17% for Lexis+ AI, 33% for Westlaw AI-Assisted Research, and 43% for GPT-4. [3] These aren’t minor formatting errors — they include fabricated cases, mischaracterised authorities, and entirely invented legal principles.

Bloated, Repetitive Submissions

In Pennisi [2026], a worker lodged 53 pages of forms and submissions that the Commission identified as AI-generated. The material repeated the same arguments multiple times, with the reasoning shifting and evolving with each repetition. The Commission found it difficult to identify the relevant considerations buried within the volume of material. The application — an attempt to lodge a general protections claim six months late — was rejected. [8]

Unrealistic Settlement Expectations

Justice Hatcher demonstrated this problem in real time during his Victorian Bar Association presentation. He opened ChatGPT, told it he’d been dismissed, provided a handful of basic facts, and within 10 minutes had a ready-to-file application and witness statement. The AI told him he could realistically expect $15,000 to $40,000 in compensation. It also generated what Justice Hatcher described as a “substantially invented story” about the dismissal. [4]

The reality is starkly different. According to FWC data, of general protections dismissal matters resolved in 2024–25 that involved a monetary settlement, 33% settled for less than $4,000, and 61% settled for less than $10,000. The median monetary settlement was in the range of $4,000 to $5,999. [9] For unfair dismissal claims, the median conciliation settlement sits at approximately $8,704, and less than 1% of all claims result in a formal judgment. Only 0.76% of unfair dismissal claims are awarded compensation as a remedy by the Commission. [10]

In other words: ChatGPT told the President of the Fair Work Commission to expect $15,000–$40,000. The actual median general protections settlement is $4,000–$5,999.

This is the core problem employees face when using AI for workplace disputes. The tool doesn’t assess the actual strength of your case. It doesn’t know your employer’s side of the story. It doesn’t understand the strict time limits and procedural requirements that apply to your situation. It simply produces confident, polished output that looks professional but may have no basis in reality.

The Commission’s Response: New Disclosure Rules and Potential Penalties

The Fair Work Commission isn’t just observing this trend — it’s acting on it. On 24 March 2026, the Commission published an exposure draft of its formal Guidance Note: Use of Generative Artificial Intelligence in Commission cases, setting out three specific requirements that will apply to anyone who uses GenAI to prepare documents for lodgment. [11] [17]

Disclosure: If you use GenAI to prepare any application or document for a Commission case, you must state in the document that GenAI was used. This will be built into all Commission forms as a new “Use of GenAI” section. [17]

Verification: You must check the document and confirm that all details — including references to facts, legislation, and case law — are correct and relevant to your case. You must state in the document that this checking has been done. For legal practitioners and paid agents, there is an additional requirement: you must include hyperlinks to all case law cited. [17]

Witness statements: If the document is a witness statement or declaration, the witness must check that it is based on their own knowledge, confirm it is true to the best of their knowledge, and declare this in the document. [17]

Beyond these three requirements, the Commission has flagged broader changes:

Consequences for non-compliance: Justice Hatcher has signalled that failure to comply with these requirements could result in applications being dismissed or costs orders being made against the applicant. [8]

Legislative reform: The Commission is seeking amendments from the Federal Government to allow more matters to be dealt with on the papers and to expand its powers to dismiss claims that have no reasonable prospects of success. [12]

Procedural tightening: The FWC has already reformed the general protections application process. The amended Form F8 now requires a more rigorous articulation of the claim, replacing the old practice of simply ticking boxes. Applications lodged outside the 21-day timeframe must explain why exceptional circumstances apply, and a Commission member will review these before the application is even sent to the employer. [9]

The message is clear: using AI to prepare your claim is not prohibited, but relying on it without proper verification could cost you your case — and potentially result in a costs order against you.

The Commission also recommends that anyone preparing documents for lodgment should not provide personal or confidential information to a public GenAI tool, or to any GenAI tool that may not keep that information secure from disclosure. [17]

For Employees: What AI Can’t Do for You

If you’re considering making an unfair dismissal or general protections claim, it’s important to understand what AI tools actually deliver — and what they can’t.

AI can give you a general overview of your rights under the Fair Work Act. It can explain what an unfair dismissal claim involves. It can help you understand basic terminology.

AI cannot assess the specific merits of your claim. It cannot account for your employer’s perspective or the evidence they may present. It cannot navigate the strict 21-day time limit and procedural requirements that apply to Fair Work applications. It cannot verify that the legal authorities it cites actually exist. And critically, it cannot give you a realistic assessment of what your claim is actually worth.

The real danger isn’t that AI gives you no information — it’s that it gives you confident but wrong information. Stanford researchers found that AI models tend towards overconfidence on legal queries, often overstating their certainty even when their answers are incorrect. [2] A polished, professional-looking application that cites fabricated case law and overstates your prospects does more harm than no application at all.

It’s also worth noting that more than half of general protections applicants are self-represented — only 46% had a lawyer or paid agent between July 2022 and September 2025. [9] Self-represented applicants relying on AI are the most vulnerable to these problems, and they’re exactly the people the Commission’s new disclosure requirements are designed to address.

For context, the unfair dismissal compensation cap is currently $91,550 for dismissals occurring on or after 1 July 2025, and the application fee is $89.70. [13] But the maximum is not the typical outcome. The actual median settlement figures should inform your expectations — not the figures ChatGPT generates.

For Employers: The Hidden Costs of AI-Managed Processes

The AI problem isn’t limited to employees. Employers — particularly small business owners — are increasingly using AI tools to draft termination letters, manage performance improvement processes, conduct workplace investigations, and even respond to Fair Work applications.

The risks here are significant — and the resulting mistakes are often more expensive than the professional advice would have been.

Procedural deficiencies: AI doesn’t know your enterprise agreement, your award, or the specific policies that apply to your business. A generic termination process that misses a required step can turn a straightforward dismissal into a successful unfair dismissal claim. The Commission considers multiple factors under section 387 of the Fair Work Act, including whether the employee was notified of the reason, given an opportunity to respond, and warned about unsatisfactory performance. [14] AI-generated processes routinely miss one or more of these steps.

Documentation gaps: AI-drafted warning letters and show cause notices often miss critical elements — specific allegations, adequate response timeframes, or references to the correct provisions. These gaps become weaknesses that employee advocates will exploit.

Settlement inflation: When an employer’s process is flawed, the cost of settling the resulting claim increases substantially. A dismissal that would have been defensible with proper process often becomes a matter where the employer has to settle at a higher figure simply because the AI-generated process created vulnerabilities.

Volume of claims to defend: Employers should also understand what they’re facing from the other direction. With AI lowering the barrier to filing, employers can expect more claims — including from employees who wouldn’t have pursued the matter without AI assistance. Employment law firm Littler advises employers to expect more “plausible but weak” internal grievances and external claims, and to focus on the merits of claims themselves rather than debating how they were generated. [15]

The irony is clear: employers who use AI to save on legal costs during the termination process frequently end up paying more in the conciliation or hearing that follows.

Why Professional Advice Still Matters More Than Ever

The rise of AI in workplace disputes doesn’t make professional advice less relevant — it makes it more important than it’s ever been.

For employees, working with an experienced workplace relations professional means getting an honest assessment of your claim before you invest time, emotional energy, and potentially money into a process that may not succeed. It means submissions that cite real case law, present facts accurately, and set realistic expectations about outcomes. Approximately 75% of unfair dismissal cases settle successfully at conciliation [16] — but achieving a fair settlement requires understanding what your claim is actually worth, not what ChatGPT tells you it’s worth.

For employers, professional advice means getting the process right from the start. Proper termination procedures, compliant documentation, and defensible investigation processes are the foundation of avoiding costly claims. And when a claim does land, having someone who can quickly assess its merits and manage the response saves time, money, and stress.

At Fair Workplace Solutions, we work with both employees and employers across NSW and Australia. We’ve seen first-hand how AI-prepared claims and AI-managed processes play out at the Commission — and the common thread is clear: the people who get proper advice early spend less and achieve better outcomes.

What to Do Next

Whether you’re an employee who’s been dismissed or an employer facing a claim, the most important step is the same: get informed advice before you act.

Using AI to understand your general rights is fine. Using it to prepare and file a claim at the Fair Work Commission — or to manage a termination process — is a risk that increasingly carries real consequences.

If you’d like to discuss your situation with someone who understands how the Fair Work system actually works, contact Fair Workplace Solutions for a confidential consultation.


Sources

All statistics and claims in this article are sourced from the following primary documents, academic research, and legal analysis.

    1. Justice Adam Hatcher, “A Disrupted Future: Artificial Intelligence and the Fair Work Commission,” Presentation to the Victorian Bar Association, 18 February 2026. View PDF (fwc.gov.au)
    2. Dahl, M. et al., “Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models,” Stanford RegLab / Stanford Institute for Human-Centered AI, 2024. View study (hai.stanford.edu)
    3. Magesh, V. et al., “Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools,” Journal of Empirical Legal Studies, 2025. View PDF (dho.stanford.edu)
    4. Dynamic Business, “AI-Written Fair Work Claims Are Surging. Small Businesses Are Paying the Price,” February 2026.
    5. Justice Hatcher, Presentation to the Australian Industry Group, 2025–2026.
    6. Australian Workplace News & Analysis, “AI Hallucinations in Fair Work Claims: Fake Precedents Slammed,” 2026.
    7. Cronkite News / Damien Charlotin’s AI Hallucination Cases Database (HEC Paris), “As More Lawyers Fall for AI Hallucinations, ChatGPT Says: Check My Work,” October 2025. Read article (cronkitenews.azpbs.org)
    8. Employment Law Handbook Australia, “Dismissed Employees Flooding FWC with AI-Generated Claims,” 2026.
    9. HWLE Lawyers, “Fair Work Commission to Reform General Protections Dismissal Application Process,” 2026.
    10. ACAPMAg, “Unfair Dismissal Outcomes, Stats and Myths,” 2024. Read article (acapmag.com.au)
    11. Inside Small Business, “AI-Driven Claims ‘Overwhelming’ Fair Work Commission Staff,” February 2026. Read article (insidesmallbusiness.com.au)
    12. iTnews, “Fair Work Commission Bogged Down by AI Filings,” February 2026. Read article (itnews.com.au)
    13. Fair Work Commission Bulletin, Volume 7/25, 3 July 2025. View PDF (fwc.gov.au)
    14. Fair Work Commission, Unfair Dismissals Benchbook, Published 1 July 2025. View PDF (fwc.gov.au)
    15. Littler, “Australia: AI Assisted Claims Are Here,” 2026. Read article (littler.com)
    16. Fair Workplace Solutions, “Appealing Unfair Dismissal Decisions in Australia,” August 2025. Read article (fairworkplacesolutions.com.au)
    17. Justice Hatcher, President’s Statement: “Exposure draft of the Commission’s Guidance Note: Use of Generative Artificial Intelligence in Commission cases — opportunity to comment,” Fair Work Commission, 24 March 2026. View PDF (fwc.gov.au)