If you’ve been paying attention to AI news this month, you’ve likely seen the story, covered by numerous media outlets, about Air Canada’s chatbot and the lawsuit it sparked. What exactly happened, and what does the outcome mean for generative AI’s place in business? Let’s break it down.
What Did Air Canada Do?
Air Canada, Canada’s largest commercial airline, has been ordered by the Canadian Civil Resolution Tribunal to pay $812.02 in damages and court fees to a customer who was misled by the company’s chatbot.
Air Canada customer Jake Moffat was trying to book a flight from Vancouver to Toronto using Air Canada’s website after their grandmother died. Moffat wasn’t sure how the airline’s bereavement travel policy worked, so they did what many of us would have done in the same situation: they asked the chatbot assistant that was then available on Air Canada’s website (it has since been taken down).
The chatbot dutifully informed Moffat: “if you need to travel immediately or have already travelled and would like to submit your ticket for a reduced bereavement rate, kindly do so within 90 days of the date your ticket was issued by completing our Ticket Refund Application form.”
Moffat took the chatbot at its word and purchased a ticket at full price, expecting to be able to apply for a bereavement discount after the trip.
Just one problem: the bereavement policy Air Canada’s chatbot shared with Moffat was totally made up. Even worse, it directly contradicted the company’s actual policy, which states that the bereavement discount “does not apply to requests for bereavement consideration after travel has been completed.” Whoops.
What Was the Result of Moffat v. Air Canada?
After Air Canada refused to own up to its mistake, Moffat filed a claim with Canada’s Civil Resolution Tribunal to get the refund the chatbot promised them. In a court order dated February 14th, Full-Time Tribunal Member Christopher Rivers decided the case in Moffat’s favor, citing “negligent misrepresentation” on Air Canada’s part and calling the airline’s argument that it should not be held responsible for its chatbot’s actions “remarkable.”
As Rivers puts it: “While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.”
What Does Moffat v. Air Canada Mean for AI in Business?
In the grand scheme of a multi-billion-dollar company’s finances, one $800 judgment in small claims proceedings is barely noticeable. Where Air Canada’s blunder will almost certainly hurt much more is in the consumer trust department. Air Canada’s irresponsible use of its chatbot, along with its generally dismissive stance toward corporations’ obligation to use AI responsibly, is not a good look in a business environment where many consumers are already leery of the technology.
Moffat v. Air Canada should serve as a cautionary tale for every business owner who plans to participate in the AI craze. There’s nothing wrong with using artificial intelligence to aid your business (in fact, there are many advantages to doing so), but AI still needs human supervision. As Air Canada found out the hard way, replacing your human customer service agents with unchaperoned chatbots or populating your website with AI-generated content that a human hasn’t vetted is a recipe for disaster. And if your company doesn’t happen to have the financial resources of one of the largest airlines in the world, you might find that such a disaster is enough to put you out of business for good.
How to Use AI Content at Your Business Responsibly
To avoid repeating Air Canada’s mistake, don’t overestimate generative AI’s capabilities. It doesn’t always tell the truth and it certainly doesn’t know everything. If you’re using AI for content creation, one of the best steps you can take to protect the integrity of your site’s information is to carefully proofread and edit anything you generate before you publish it.
Here’s a quick checklist to help you make sure you’ve thoroughly screened your AI content:
1. Have You Checked for Factual Accuracy?
Generative AI can get facts wrong (and it often does). Just because “a chatbot said it” doesn’t mean it’s true, and as Moffat v. Air Canada shows us, it’s the responsibility of companies to make sure their chatbots aren’t lying to customers. Likewise, if your business is using generative AI to write content for its website, you need to double-check the factual accuracy of everything the AI writes.
2. Have You Checked for Canned Responses?
Simply asking an AI tool like ChatGPT to write an article or a product description doesn’t automatically result in content that’s ready to be copied, pasted, and published (in fact, it rarely does). For example, you’ll need to sort through the content to make sure it doesn’t contain any canned responses: boilerplate along the lines of “As an AI language model, I can’t…” or “Sure, here’s your article!” Believe it or not, businesses sometimes publish this generic AI text by mistake simply because no one bothered to proofread it first. As you can probably guess, it’s not hard to spot.
3. Have You Checked for Repetition?
Generative AI tools are great at saying almost nothing in a lot of words. Make sure your 800-word, AI-assisted article is not just stretching out 200 words worth of information to meet the word count you requested. Sometimes, an AI tool may even repeat an entire paragraph or section verbatim.
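If you’re screening drafts regularly, even a tiny script can catch the most blatant cases. Below is a minimal sketch (a hypothetical helper, not part of any real editing tool) that flags paragraphs and sentences that appear verbatim more than once in a draft; it won’t catch subtle padding, only exact repeats, so it complements rather than replaces a human read-through.

```python
def find_repeats(text: str) -> dict:
    """Return paragraphs and sentences that appear more than once (case-insensitive)."""
    # Paragraphs are separated by blank lines; sentences split naively on ". ".
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    sentences = [s.strip() for p in paragraphs for s in p.split(". ") if s.strip()]

    def dupes(items):
        seen, repeated = set(), set()
        for item in items:
            key = item.lower()
            if key in seen:
                repeated.add(item)
            seen.add(key)
        return sorted(repeated)

    return {"paragraphs": dupes(paragraphs), "sentences": dupes(sentences)}

draft = (
    "Our widget saves you time.\n\n"
    "It installs in minutes. Our widget saves you time.\n\n"
    "It installs in minutes. Our widget saves you time."
)
print(find_repeats(draft))  # flags the duplicated paragraph and both repeated sentences
```

Anything the script flags is worth a closer look; anything it misses still needs the checks above.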
4. Have You Checked for Low-Quality Writing?
While AI has gotten remarkably better at writing over the past couple of years, most models are still not capable of producing human-quality writing. Even if your AI-generated content is factually accurate and free of canned responses and repetition, you will likely notice it still feels lifeless and formulaic most of the time. In the future, we may begin to see AI-generated content that’s truly indistinguishable from human-written content, but for now, even a quick pass by a skilled human writer can dramatically improve the quality of an AI-generated piece.
Professional Content Creation and AI-Editing Services
Effective AI editing takes time and skill. If you want to produce AI-generated SEO content for your business at scale, you’ll need to be prepared to invest significant resources. If you don’t want to end up on the wrong side of AI chatbot news, work with experts to make sure all of your content is work you can stand behind.
At Content Cucumber, we specialize in crafting content that drives traffic for your business and strengthens your digital brand image at the same time. Whether you’re looking for help proofreading and polishing AI-generated drafts or writing expert content from scratch, our writers have you covered.
Frequently Asked Questions
What happened in the Air Canada chatbot case?
Air Canada was ordered to pay $812.02 in damages after its chatbot gave a customer false information about bereavement travel refunds, promising a discount could be applied after travel when the actual policy prohibited retroactive requests.
Who sued Air Canada over its chatbot?
Jake Moffat was the Air Canada customer who sued after being misled by the airline's chatbot about bereavement travel discounts when booking a flight after their grandmother died.
What did the tribunal rule?
The Canadian Civil Resolution Tribunal ruled in Moffat's favor, stating Air Canada is responsible for all information on its website including chatbot responses, citing negligent misrepresentation.
Are companies responsible for what their chatbots say?
Yes, companies are responsible for all information on their websites, including chatbot responses, as the chatbot is just a part of the company's website.
How can businesses use AI chatbots responsibly?
Businesses should provide human supervision for AI chatbots and carefully vet all AI-generated content before publishing to ensure factual accuracy and avoid misinformation.
What should you check before publishing AI-generated content?
Check for factual accuracy, canned responses, repetition, and low-quality writing to ensure AI content meets professional standards before publishing.
What is Air Canada's actual bereavement policy?
Air Canada's actual policy states that bereavement discounts do not apply to requests made after travel has been completed, contrary to what the chatbot told the customer.
Why did Air Canada lose the case?
Air Canada lost because the tribunal found the company negligently misrepresented its policy through the chatbot and rejected Air Canada's argument that it shouldn't be held responsible for its chatbot's actions.
What are the risks of unsupervised AI chatbots?
Unsupervised AI chatbots can provide false information to customers, leading to legal liability, financial damages, and loss of consumer trust for businesses.
Can AI-generated content be published without human review?
No, AI-generated content should not be published without human review as it can contain factual errors, repetition, canned responses, and low-quality writing that could harm a business's reputation.

