Alex is Sprintlaw’s co-founder and principal lawyer. Alex previously worked at a top-tier firm as a lawyer specialising in technology and media contracts, and founded a digital agency which he sold in 2015.
Artificial intelligence now sits behind many of the everyday digital tools New Zealand businesses rely on. It helps triage customer enquiries, personalise marketing, screen job applicants, flag fraud, analyse documents and automate repetitive work. Because much of this technology isn’t explicitly labelled as “AI”, many Kiwi organisations underestimate how deeply embedded it already is.
As AI becomes more widespread, governments are introducing new rules to ensure it is used responsibly. The most influential development so far is the EU Artificial Intelligence Act (EU AI Act) - the first comprehensive legal framework anywhere in the world regulating how AI must be developed, deployed and monitored. Although this is an EU law, it applies extraterritorially: a business anywhere in the world can be caught if an AI system is placed on the EU market or the outputs it generates are used inside the EU. For New Zealand businesses that serve European clients, attract EU users or license AI-enabled tools internationally, this can create obligations that are easy to overlook.
This guide breaks down the EU AI Act in clear, practical terms, with examples tailored for NZ SMEs and organisations looking to stay compliant when engaging with EU markets.
What the EU AI Act Is Trying to Achieve
At its core, the Act aims to ensure that AI is used safely, fairly and transparently. It seeks to prevent harmful uses of AI while supporting responsible innovation. To achieve this balance, the EU uses a risk-based approach, recognising that not all AI is created equal. Systems that pose little risk to people or society face minimal requirements, while tools used in sensitive areas come with more substantial obligations.
For New Zealand businesses, this approach is helpful. Many familiar AI tools fall comfortably into low-risk categories, and only certain industry uses - such as hiring, credit assessments, education or essential service access - trigger higher levels of regulation.
What Counts as an AI System?
The Act defines an AI system as a machine-based system that infers from input data how to generate outputs such as predictions, recommendations, decisions or content. This definition includes systems built using machine-learning techniques, logic- or knowledge-based approaches and statistical or optimisation methods. What it does not include are simple, transparent rule-based systems like basic “if–then” workflows.
In practical terms, this means many tools used routinely in New Zealand qualify as AI systems under the Act - such as applicant-ranking platforms, identity-verification tools used by fintechs, chatbots for customer service, fraud-scoring models or AI-generated marketing content. Mapping which systems your organisation uses is the first step in understanding whether any EU AI Act obligations apply.
Does the EU AI Act Apply to New Zealand Businesses?
It can, and often in ways businesses do not expect. A company does not need a European office or entity to fall within the scope of the Act. What matters is whether the AI system is used inside the EU, or whether the outputs it produces are used there.
If an Auckland HR consultant uses AI to shortlist candidates for a German employer, the Act may apply because the output influences an EU-based decision. A Wellington SaaS provider offering AI-powered risk assessments to an EU client may also fall into regulated territory, depending on the purpose of the analysis. A Christchurch creative agency producing AI-generated images or video for EU audiences may trigger transparency obligations. Even a New Zealand fintech using AI for identity-verification may fall into scope if EU customers rely on those results.
A website being visible in Europe is not enough. What matters is whether EU users meaningfully interact with the AI or whether the AI’s outputs have a real effect in EU contexts.
How the EU AI Act Categorises Risk
The Act divides AI into four categories: unacceptable, high, limited and minimal risk. Each category comes with different expectations.
| Risk Level | Meaning | What It Requires |
|---|---|---|
| Unacceptable Risk | Uses of AI that pose an unacceptable threat to people's rights, safety or democratic values. These systems are banned entirely. | Cannot be sold, provided or used in the EU. Includes practices such as social scoring and certain forms of biometric surveillance. |
| High Risk | AI used in sensitive areas where decisions can significantly affect individuals (such as hiring, credit, education or essential services). | Strict obligations: documented risk management, high-quality data, human oversight, technical documentation, monitoring and (in many cases) conformity assessments. |
| Limited Risk | AI where the main concern is that users may not realise AI is involved. | Transparency duties. Users must be told when they are interacting with AI or receiving AI-generated or AI-altered content. |
| Minimal Risk | Everyday AI tools with very low potential for harm (such as spam filters and simple recommendations). | No additional risk-specific duties. Businesses still need basic responsible-use practices and staff AI literacy. |
Some uses of AI are considered fundamentally incompatible with EU values and are banned outright. These include systems that manipulate vulnerable individuals, emotion-recognition technology in schools or workplaces, social-scoring systems used by governments and the untargeted scraping of facial images for biometric databases. While few New Zealand businesses intentionally operate in these spaces, it is still essential to check that third-party tools do not contain prohibited capabilities.
The most significant category for New Zealand businesses working with EU clients is high-risk AI. These systems are permitted but subject to stringent requirements because they can affect people’s rights, safety or access to essential services. High-risk classification depends not on the technology itself but on the context and purpose of its use. For example, AI used in recruitment, education, credit decisions, healthcare or public-sector functions may fall into this category.
An additional nuance is that even some lower-risk systems can become subject to high-risk obligations if they are integrated into, or relied upon as part of, a high-risk workflow. For example, a relatively simple AI model that influences a hiring decision may need to meet high-risk standards because of the downstream context.
High-risk systems require strong controls around data quality, documentation, human oversight and ongoing monitoring.
A crucial distinction here is between providers and deployers. Providers - the organisations that develop or substantially modify an AI system - carry the heaviest obligations, particularly around documentation and technical compliance. Deployers - the businesses using AI created by someone else - have narrower duties, such as ensuring the system is used correctly, supervising outputs and supplying lawful, suitable data.
Limited-risk AI mainly raises transparency concerns. When users might be misled about whether they are interacting with an AI system, organisations must make that clear. For example, a NZ retailer using a chatbot for EU customers must disclose that the user is dealing with an automated system. Similarly, an agency creating synthetic media for EU audiences may need to label it if it could reasonably be taken as authentic.
The vast majority of everyday AI tools fall within the minimal-risk category - email filters, writing assistants, simple recommendation engines and other productivity features. These systems do not attract additional risk-specific duties. However, all organisations using AI, including minimal-risk tools, must meet the Act’s general expectations around AI literacy and responsible use.
Providers, Deployers and the AI Office
Most New Zealand SMEs will be deployers, but a business becomes a provider if it develops an AI system, fine-tunes a model or substantially modifies how an existing one operates. It is worth understanding this threshold before adapting any AI system.
To support consistent enforcement, the EU has created the AI Office, a central body responsible for overseeing general-purpose AI models and coordinating supervision across the EU. Its guidance will shape expectations for businesses interacting with European markets.
General-Purpose AI (GPAI)
Large, multi-use models - such as GPT, Claude and Gemini - are treated separately as general-purpose AI. The strictest obligations fall on the developers of these models, not on everyday business users. However, New Zealand businesses may take on obligations if they fine-tune a GPAI model, use it to build a high-risk system or provide EU users with a GPAI-enabled tool in a way that changes its risk profile.
The EU AI Act Timeline
The Act entered into force on 1 August 2024. Its earliest requirements - the bans on prohibited practices and general AI-literacy duties - applied from 2 February 2025. Obligations for general-purpose AI models applied from 2 August 2025. Most high-risk system obligations phase in gradually from August 2026 to August 2027, giving New Zealand businesses time to prepare.
What New Zealand SMEs Can Do Now
A practical starting point is identifying whether any part of your AI use interacts with Europe. Once this is clear, mapping the AI tools you rely on becomes straightforward. From there, you can review how they use data, what kind of oversight they require and whether any transparency obligations apply. Because many EU clients are now requesting contractual assurances about AI compliance, updating your agreements early can help you stay competitive.
These EU requirements also sit alongside New Zealand’s own legal obligations, including privacy and data-protection rules. Reviewing both together ensures your AI practices remain compliant across jurisdictions.
Final Thoughts
The EU AI Act represents a major shift in global AI regulation. For New Zealand businesses, the key question is not where you are located but where your AI has an impact. While the rules may appear complex at first glance, most SME-level tools fall into lower-risk categories. Where higher-risk uses arise, early preparation makes compliance manageable.
By taking stock of your AI systems, improving data governance, being transparent with users and updating your contracts, you can continue serving EU clients confidently and meet the expectations of an evolving regulatory landscape. When questions arise, targeted legal advice can help you navigate your obligations clearly and practically.
If you would like a consultation on legal compliance with the EU AI Act, you can reach us at 0800 002 184 or team@sprintlaw.co.nz for a free, no-obligations chat.


