By UNOS SOFTWARE AS · Published 11 March 2026

The AI Act arrives in August 2026: What Norwegian businesses need to know

The EU AI Act takes effect in Norway in August 2026. Here's a practical overview of risk categories, requirements for Norwegian businesses, and a checklist for developers building AI systems.

  • ai-act
  • eu-ai-act
  • regulation
  • compliance
  • artificial-intelligence
  • GDPR

[Illustration: abstract artificial intelligence and regulation]

The EU AI Act is the world's most comprehensive regulation of artificial intelligence. It takes effect as Norwegian law in August 2026 through the EEA Agreement, affecting all Norwegian businesses that develop, distribute, or use AI systems — with requirements for documentation, risk management, and transparency.

For many companies, this means new obligations, documentation requirements, and potentially significant fines for non-compliance. But it also represents an opportunity to build trust with customers and differentiate from competitors who cut corners.

In this article, we provide a practical overview of what the law entails, who is affected, and what you should be doing now.

What is the AI Act?

The EU AI Act is a European regulation governing the development, distribution, and use of artificial intelligence. It classifies AI systems into four risk levels, from minimal to unacceptable, and imposes stricter requirements the higher the risk a system poses. Norway is bound through the EEA Agreement.

Background: Why regulate AI?

The EU adopted the AI Act in March 2024 after three years of negotiations. The regulation was designed with one clear goal: ensuring that AI systems used in Europe are safe, transparent, and respect fundamental rights — without stifling innovation.

Norway is bound through the EEA Agreement, and Norwegian authorities have been working on incorporation since 2024. The Norwegian Data Protection Authority (Datatilsynet) and the Norwegian Communications Authority (Nkom) will share supervisory responsibilities, with Datatilsynet handling AI systems that affect personal data, and Nkom responsible for general-purpose AI models and market surveillance.

The timeline

  • February 2025: Prohibited AI practices take effect in the EU
  • August 2025: Rules for general-purpose AI models (GPAI) take effect in the EU
  • August 2026: Main rules take effect — including high-risk requirements and Norwegian implementation via EEA
  • August 2027: Extended requirements for certain high-risk systems in Annex III

Risk-based approach: Four categories

[Illustration: data visualization and analysis on screen]

At the core of the AI Act is a risk-based classification system. The higher the risk an AI system poses to health, safety, or fundamental rights, the stricter the requirements.

| Risk level | Description | Examples | Requirements |
| --- | --- | --- | --- |
| Unacceptable risk | AI systems that threaten fundamental rights | Social scoring, manipulation of vulnerable groups, real-time biometric mass surveillance | Total ban |
| High risk | AI systems used in critical sectors | Recruitment tools, credit scoring, medical diagnostics, educational assessment, border control | Strict requirements for documentation, testing, monitoring, and human oversight |
| Limited risk | AI systems with transparency needs | Chatbots, deepfake generators, emotion recognition | Disclosure obligation: users must know they are interacting with AI |
| Minimal risk | Low-risk AI systems | Spam filters, AI in games, automated recommendations | No specific requirements, but voluntary codes of conduct are encouraged |

What does "high risk" mean in practice?

Many Norwegian businesses developing business-critical software will encounter at least one high-risk system. Consider:

  • HR tools that use AI to filter job applications
  • Financial services that use AI for credit scoring or fraud detection
  • Healthcare solutions that use AI for diagnostics or treatment suggestions
  • Educational platforms that use AI to assess student performance

If you develop or use such systems, you are likely subject to the strictest requirements.
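A first practical step is an inventory that maps each AI component to its assessed risk level, with a documented rationale. The sketch below is a minimal, hypothetical example of such an inventory; the system names and rationales are illustrative, not drawn from the regulation itself:

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical inventory: each AI component mapped to its assessed risk
# level, with a short rationale that doubles as classification documentation.
AI_INVENTORY = {
    "cv-screening": (RiskLevel.HIGH, "Filters job applications (employment use case)"),
    "support-chatbot": (RiskLevel.LIMITED, "Interacts with users; disclosure required"),
    "spam-filter": (RiskLevel.MINIMAL, "No impact on rights or safety"),
}

def systems_at_level(level):
    """Return the names of inventoried systems at a given risk level."""
    return [name for name, (lvl, _) in AI_INVENTORY.items() if lvl is level]

print(systems_at_level(RiskLevel.HIGH))  # ['cv-screening']
```

Keeping the rationale next to the classification means the inventory itself becomes part of the documentation trail supervisory authorities can ask for.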

Requirements for high-risk systems

For systems classified as high risk, the AI Act requires the following:

1. Risk management system

You must establish a documented risk management system that identifies, analyzes, and manages risks throughout the system's entire lifecycle. This is not a one-time exercise — it requires continuous updates.

2. Data quality and data governance

Training data, validation data, and test data must meet quality criteria for relevance, representativeness, and accuracy. You must document data sources and address known biases.

3. Technical documentation

The system must have comprehensive technical documentation that enables assessment of whether it meets the requirements — including system description, design choices, training methodology, and performance metrics.

4. Logging and traceability

High-risk systems must automatically log events so that usage can be traced afterward. Logs must be retained in accordance with the system's purpose and made available to supervisory authorities.
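The AI Act requires automatic event logging for high-risk systems but does not prescribe a format. A minimal sketch of one structured audit record per AI decision might look like this; the schema and field names are our assumptions about what a supervisory authority would plausibly ask for:

```python
import json
import logging
import uuid
from datetime import datetime, timezone

logger = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)

def log_ai_event(system_id, model_version, inputs_ref, output, operator=None):
    """Write one traceable audit record per AI decision (hypothetical schema)."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "inputs_ref": inputs_ref,  # reference to stored inputs, not raw personal data
        "output": output,
        "operator": operator,      # the human user or session, if any
    }
    logger.info(json.dumps(record))
    return record

rec = log_ai_event("credit-scoring", "v2.3.1", "s3://bucket/req-8812", {"score": 0.71})
```

Note that the record stores a reference to the inputs rather than the inputs themselves, which helps keep audit logs and personal data (a GDPR concern) separated.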

5. Human oversight

The system must be designed so that humans can monitor and intervene in its operation. For critical decisions, the AI system should support — not replace — human judgment.
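One common pattern for this is a human-in-the-loop gate: decisions below a confidence threshold, or flagged as high-impact, are escalated to a human reviewer instead of being applied automatically. The threshold and function names below are illustrative assumptions, not anything the regulation specifies:

```python
REVIEW_THRESHOLD = 0.90  # below this confidence, escalate to a human (illustrative)

def decide(prediction, confidence, high_impact=False):
    """Return an action: auto-apply the prediction, or escalate for human review."""
    if high_impact or confidence < REVIEW_THRESHOLD:
        return {"action": "escalate", "prediction": prediction,
                "reason": "high impact" if high_impact else "low confidence"}
    return {"action": "auto", "prediction": prediction}

print(decide("approve", 0.97))                   # auto-applied
print(decide("reject", 0.97, high_impact=True))  # escalated: high-impact decisions always reviewed
```

The key design point is that the AI proposes and the human disposes for critical decisions, which is exactly the "support, not replace" principle above.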

6. Accuracy, robustness, and cybersecurity

The system must achieve adequate accuracy, be robust against errors and attacks, and be protected with appropriate cybersecurity measures.

7. CE marking and declaration of conformity

Before a high-risk system is placed on the market, it must undergo a conformity assessment, the provider must draw up an EU declaration of conformity, and the system must receive CE marking.

Practical checklist for Norwegian developers

If you are building software that uses AI, you should start preparing now. Here is a checklist:

Mapping and classification

  • Identify all AI components in your products and services
  • Classify each system by risk level (unacceptable, high, limited, minimal)
  • Document the rationale for the classification

Documentation and governance

  • Establish a risk management system for high-risk systems
  • Document data quality, training data, and known biases
  • Create technical documentation covering the entire lifecycle
  • Implement logging and traceability in AI components

Testing and validation

  • Conduct thorough testing for accuracy, robustness, and biases
  • Establish procedures for continuous post-deployment monitoring
  • Test for resilience against adversarial attacks

Transparency and user rights

  • Inform users when they are interacting with AI (applies to all risk levels)
  • Ensure high-risk systems have mechanisms for human oversight
  • Document limitations and intended use of the system

Organizational readiness

  • Designate responsible person(s) for AI compliance in your organization
  • Conduct training for developers and product owners
  • Establish procedures for notifying supervisory authorities of serious incidents

Penalties: Fines up to 35 million euros

The AI Act has a penalty regime reminiscent of the GDPR, but with even higher maximum amounts:

| Violation | Maximum fine |
| --- | --- |
| Use of prohibited AI systems | EUR 35 million or 7% of global annual turnover, whichever is higher |
| Breach of high-risk requirements | EUR 15 million or 3% of global annual turnover, whichever is higher |
| Misinformation to supervisory authorities | EUR 7.5 million or 1% of global annual turnover, whichever is higher |
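Because each ceiling is the fixed amount or the percentage of global annual turnover, whichever is higher, the percentage dominates for large companies. A quick illustration of the arithmetic (figures are the AI Act's tiers; the turnover is made up):

```python
# (fixed cap in EUR, share of global annual turnover) per violation tier
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_breach":    (15_000_000, 0.03),
    "misinformation":      (7_500_000, 0.01),
}

def max_fine(tier, annual_turnover_eur):
    """Maximum fine: the fixed cap or the turnover share, whichever is higher."""
    fixed, pct = TIERS[tier]
    return max(fixed, pct * annual_turnover_eur)

# A company with EUR 2 billion turnover: 7% = EUR 140m exceeds the EUR 35m floor.
print(max_fine("prohibited_practice", 2_000_000_000))  # 140000000.0
```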

For SMEs and startups, proportionally lower fines apply, but they are still significant. Datatilsynet has demonstrated through GDPR enforcement that Norwegian authorities take European regulation seriously — there is no reason to believe the AI Act will be any different.

Who is responsible?

The AI Act distinguishes between several roles:

  • Provider: The entity that develops the AI system and places it on the market. Bears primary responsibility for compliance.
  • Deployer: The entity that puts the AI system into use in its business. Has obligations related to use, monitoring, and informing affected individuals.
  • Importer and distributor: Actors in the value chain who also have certain obligations.

If you both develop and use the AI system within your own business, you are both provider and deployer — and have obligations from both roles.

General-purpose AI models (GPAI): Additional rules

[Illustration: network of interconnected nodes representing AI systems]

The AI Act has specific rules for general-purpose AI models — such as large language models (LLMs). All GPAI providers must:

  • Maintain up-to-date technical documentation
  • Have policies for compliance with copyright law
  • Publish a sufficiently detailed summary of training data

GPAI models with "systemic risk" (models trained with more than 10^25 FLOPs) have additional requirements, including model evaluations, adversarial testing, and incident reporting.
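To get a feel for where the 10^25 FLOPs threshold sits, one can use the common back-of-envelope estimate of roughly 6 × parameters × training tokens for dense transformer training compute. This heuristic and the model figures below are illustrative assumptions, not part of the regulation:

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # the AI Act's training-compute threshold

def training_flops(n_params, n_tokens):
    """Rough training compute estimate for a dense transformer (~6 * N * D)."""
    return 6 * n_params * n_tokens

# Illustrative: a 70B-parameter model trained on 15 trillion tokens.
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e}")                    # 6.30e+24
print(flops > SYSTEMIC_RISK_THRESHOLD)   # False: below the threshold
```

Under this estimate, only a handful of frontier-scale training runs cross the threshold, which matches the point above: most Norwegian businesses will consume such models rather than train them.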

For most Norwegian businesses, this primarily means you need to understand which GPAI models you use in your products and ensure the provider meets its obligations.

What does this mean for Norwegian development teams?

For software teams in Norway, the AI Act represents a fundamental shift in how AI projects are run:

AI is no longer a "just try it" project. You need a deliberate strategy for which AI components you use, why you use them, and how you monitor them.

Documentation becomes as important as the code. Technical documentation, risk management, and data quality reports are not optional — they are legally required for high-risk systems.

Testing must be expanded. In addition to functional testing, you need testing for biases, robustness, and adversarial scenarios.

Responsibility sits with leadership. AI compliance is not something the development team can solve alone — it requires commitment from the executive team and cross-functional collaboration.

How we can help

At UNOS SOFTWARE AS, we have followed AI legislation closely since the European Commission's original proposal in 2021. We help Norwegian businesses with:

  • Mapping and classification of AI systems — we review existing solutions and identify which systems fall under which risk categories
  • Technical consulting for building compliant AI systems — from architecture and data handling to logging and monitoring through our technical consulting service
  • Development of AI systems with compliance built in from the start — our software development service ensures that risk management, documentation, and testing are integrated into the development process

The AI Act is not an obstacle — it is a quality standard. Businesses that take it seriously from the start will have a competitive advantage with customers who value safety and transparency.

Unsure how the AI Act affects your business? Get in touch for an informal conversation about your AI systems and compliance needs.


Sources and further reading

  • European Parliament (2024). "Regulation (EU) 2024/1689 — Artificial Intelligence Act." eur-lex.europa.eu
  • Datatilsynet (2025). "Preparations for the AI Regulation in Norway." datatilsynet.no
  • Norwegian Communications Authority (2025). "Nkom's role in AI supervision." nkom.no
  • Norwegian Government (2025). "Norwegian implementation of the EU AI Act." regjeringen.no
  • European Commission (2025). "AI Act Implementation Timeline." digital-strategy.ec.europa.eu
  • OECD (2025). "AI Policy Observatory — Norway." oecd.ai
