The Problem with Black-Box AI for NZ Service Businesses
By Toby Bell-Ramsay, AliceHQ
AI Is No Longer Answering Questions — It's Taking Actions
There is a useful distinction between the first generation of AI customer service tools and the generation that is being deployed now. The first generation answered questions. It sat in a chat widget, responded to FAQ-style queries, and handed off to a human when things got complicated. The worst that could happen was a wrong answer. The customer got confused, or frustrated, and asked to speak to someone. The business could correct it. No transaction had been made. Nothing had been written to any system.
The current generation is different. AI voice agents now book guests into your property management system. They log maintenance requests. They create calendar appointments. They update CRM records. They take deposit payments. They triage patient calls and route them to clinical staff. They are not answering questions about your business — they are operating your business. And when an AI takes a real operational action on behalf of your business, it creates real operational liability.
The question that most operators have not yet asked themselves is: if my AI does something and I need to explain it — to a customer, to a regulator, to a lawyer — can I? The answer, for most currently available AI systems, is no. That is the Black-Box AI problem.
The Air Canada Problem, Closer to Home
The Air Canada case is now the canonical example of AI liability, and it is worth understanding precisely what happened. Air Canada's chatbot told a grieving passenger that he could book at the full fare and apply for the bereavement discount retroactively, after his flight. This was not Air Canada's policy. The chatbot invented it. When Air Canada tried to disclaim responsibility by arguing the chatbot was "a separate legal entity that is responsible for its own actions," British Columbia's Civil Resolution Tribunal rejected that argument entirely. Air Canada was held liable for what its AI said, and ordered to pay the fare difference plus costs.
NZ consumer law operates on the same principles; the Fair Trading Act 1986 prohibits misleading or deceptive conduct in trade, and a misleading statement is no less misleading because software made it. When your AI communicates with a customer, whether it quotes a price, confirms a booking, describes a policy, or makes an appointment, those communications are representations made on behalf of your business. If they are incorrect, or if a customer later disputes what was communicated, your business is accountable for what the AI said.
For NZ hospitality operators, the scenario is not hypothetical. A guest books by phone at 2am through your AI system. The AI quotes a rate. The guest arrives and the rate at check-in is different. That guest is not going to calmly accept the discrepancy; they are going to produce their phone and say "your system told me this." For a property management firm, the scenario is a tenant maintenance call that the AI handled three months ago, where the Tenancy Compliance and Investigations Team (TCIT) is now asking for evidence that it was received and actioned within your required response window. For a trades business, it is a customer who says the AI quoted $200 and the invoice says $350.
In each case, the question is the same: what did your AI do, exactly? And if the AI is a black box, you cannot answer it.
What the Regulations Are About to Require
The regulatory environment for AI in NZ and Australia is not waiting for vendors to catch up. Several frameworks are already in effect or activating within the next twelve months.
The NZ Privacy Act 2020 applies to any AI system handling personal information, including voice agents. Information Privacy Principle 5 requires businesses to protect personal information against loss, unauthorised access, use, modification, or disclosure. Principle 8 requires agencies to take reasonable steps to check that personal information is accurate, up to date, and complete before using it. The Office of the Privacy Commissioner has explicitly stated that AI systems making or contributing to decisions affecting individuals should maintain an audit trail of those decisions. This is not guidance for some future AI regulation; it is the current interpretation of existing law applied to current AI deployments.
For healthcare providers, the Health Information Privacy Code 2020 (HIPC) is stricter still. HIPC Rule 5 requires agencies to protect health information from loss or unauthorised access. Rule 10 requires that health information be used only for the purpose for which it was collected, which in practice means you must be able to trace what happened to it. If an AI triage system routes a patient call, the record of that routing decision is potentially the only evidence that the information was handled appropriately.
Australia's Privacy and Other Legislation Amendment Act 2024 introduced new APP 1.7 requirements that activate from December 2026. Any regulated entity using computer programs to make, or substantially contribute to, decisions that “significantly affect individuals’ rights or interests” must disclose this in its privacy policy and maintain evidence of the decision-making process. Customer-facing AI voice agents that book, triage, and update records are squarely within scope. For NZ businesses with Australian operations or customers, the nine months before December 2026 are the window to implement systems that can satisfy that evidence requirement.
The Receipted AI Answer
Receipted AI is a category of AI that treats every operational action the way a business treats every financial transaction: with a receipt. When a Receipted AI agent takes an action — creates a booking, writes to a CRM, logs a maintenance request, routes a patient call — it produces an immutable, timestamped record of that action. Not just a call transcript. A receipt showing the action taken, the system it was written to, the ID returned by that system, and the timestamp. That receipt is stored indefinitely and is accessible for review, export, and production as evidence.
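To make that concrete, here is a minimal sketch of what one receipt could contain. AliceHQ has not published its schema, so every field name and value below is an assumption invented for illustration; the structure simply mirrors the description above: the action taken, the target system, the ID it returned, and the timestamp, bound into one write-once record.

```typescript
// Illustrative sketch only: a hypothetical shape for a single receipt.
// Field names are assumptions for this article, not AliceHQ's actual schema.
interface ActionReceipt {
  receiptId: string;                 // unique ID; written once, never modified
  occurredAt: string;                // ISO 8601 timestamp of the action
  action: string;                    // e.g. "booking.created", "maintenance.logged", "notification.sent"
  targetSystem: string;              // system of record written to: "PMS", "Re-Leased", "ServiceM8"
  externalRef: string;               // the ID that system returned: booking ref, job number, record ID
  details: Record<string, unknown>;  // what was communicated or written: rate quoted, urgency, etc.
  callId: string;                    // links the receipt back to the call transcript
}

// The 2am booking scenario as a receipt: the rate quoted, the PMS
// reference created, and the exact timestamp, in one record.
const bookingReceipt: ActionReceipt = {
  receiptId: "rcpt_2026_000481",
  occurredAt: "2026-03-13T02:07:41+13:00",
  action: "booking.created",
  targetSystem: "PMS",
  externalRef: "BK-48213",
  details: { rateQuoted: "NZ$245/night", roomType: "Queen Studio", confirmationSentTo: "guest email" },
  callId: "call_9f27",
};
```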
The distinction from auditable AI is important. Auditable AI, as the term is used across the industry, typically means call logs, transcripts, and quality scores: records of the conversation. Receipted AI records what the conversation caused downstream. The booking that now exists in your PMS. The CRM record that was updated. The job card that was written to ServiceM8. This is a different product built on a different architecture, and no other vendor in NZ or Australia has named it or claimed it as a category. AliceHQ has.
What This Looks Like in Practice
For a property management firm, it looks like this: a tenant calls at 11pm about a water leak. Alice handles the call, logs the maintenance request in Re-Leased, classifies the urgency according to the firm's rules, and notifies the property manager. Every one of those steps produces a receipt. The timestamp on the tenant call. The Re-Leased job ID created. The urgency classification applied. The notification sent and to whom. When TCIT asks what happened, the property manager pulls up the receipt and answers the question in thirty seconds.
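Under the same illustrative assumptions as the sketch above (again, invented IDs and field names, not AliceHQ's actual schema), that one 11pm call leaves a short chain of receipts, one per step, each pointing back at the same call and each carrying the external ID it created:

```typescript
// Illustrative only: the chain of receipts one after-hours call could produce.
// Reuses the hypothetical ActionReceipt shape above; all IDs and details are invented.
const maintenanceCallReceipts: ActionReceipt[] = [
  {
    receiptId: "rcpt_2026_000512",
    occurredAt: "2026-03-06T23:02:41+13:00",
    action: "maintenance.logged",
    targetSystem: "Re-Leased",
    externalRef: "JOB-10592",          // the Re-Leased job ID created
    details: { issue: "water leak", urgency: "urgent", rule: "after-hours water = urgent" },
    callId: "call_8c31",
  },
  {
    receiptId: "rcpt_2026_000513",
    occurredAt: "2026-03-06T23:02:44+13:00",
    action: "notification.sent",
    targetSystem: "on-call roster",
    externalRef: "NOTIF-7731",
    details: { notified: "on-call property manager", channel: "SMS" },
    callId: "call_8c31",
  },
];

// Three months later, answering TCIT is a lookup, not an investigation:
const evidence = maintenanceCallReceipts.filter((r) => r.callId === "call_8c31");
```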
For a hotel, it looks like this: Alice takes a direct booking at 2am for a guest arriving Friday. The receipt shows the rate quoted, the room type confirmed, the PMS booking reference created, the confirmation sent to the guest, and the exact timestamp. If the guest arrives and claims a different rate was quoted, the receipt resolves the dispute. If the booking needs to be found in the PMS, the receipt contains the reference number. If there is ever a Privacy Commissioner inquiry about how the guest's personal information was handled during that call, the receipt is the evidence trail.
For a trades business, it looks like this: Alice handles an after-hours job enquiry, captures the job details, quotes a rate range from the approved price list, and creates the job card in ServiceM8. The receipt shows what price range was communicated, the job category assigned, the ServiceM8 job number, and the timestamp. If the customer later disputes what the AI quoted, you have the receipt. If your head office wants to audit your after-hours booking process, you have the receipts.
This is not a compliance feature. It is the operational foundation that makes AI-operated businesses defensible. Every business owner who has ever had a customer dispute, a regulator inquiry, or an insurance claim knows the value of documentation. Receipted AI is documentation for every AI action, automatically, from day one.
AliceHQ is the Receipted AI platform for NZ service businesses. Learn what Receipted AI means →
See Receipted AI in Your Business
A 30-day pilot gives you a full receipt log for every AI action. No black box. Proof for every step.