
§ 01 · The Setup
A regulator surveyed 234 banks, insurers, and asset managers about AI. The trade press read the press release.
In December 2024, Finansinspektionen, the Swedish financial supervisory authority, published a working report on the use of artificial intelligence across the firms it supervises. The data underneath the report is unusually rich for the genre. Two hundred and thirty-four firms responded — banks, insurers, payment institutions, asset managers, fund managers, securities companies, electronic money issuers, clearing organisations, and stock exchanges. The response rate was 83 per cent of those surveyed. The report covers employee use of public generative AI tools, in-house AI deployment in production and pilot, sector-by-sector and size-by-size adoption breakdowns, intended investment direction over the coming twenty-four months, and explicit preparation for the EU AI Regulation that becomes applicable on 2 August 2026.
The report was covered briefly by the Nordic financial trade press, mostly through reproduction of the headline statistic that 84 per cent of firms now have employees using generative AI tools. That is, indeed, the most quotable number in the report. It is also, by a substantial margin, not the most interesting one. The interesting numbers sit further inside the document — in the gap between employee adoption and institutional readiness, in the unequal distribution of AI deployment across sub-sectors, in the explicit acknowledgement that most Swedish financial firms believe they are lagging international competitors, and in the regulatory exposure that the survey reveals about specific use cases that will be classified as high-risk under the AI Regulation.
This article is a structural read of the underlying data. We are not reproducing the headline finding the trade press already covered. We are looking at the report as a primary source on how a heavily-regulated industry actually absorbs a fast-moving technology — what gets adopted first, what stays at the pilot stage, where the institutional friction sits, and what the regulator’s framing implies about the supervisory expectations that are about to become operational reality. The Finansinspektionen survey is one of the most useful empirical documents on AI adoption in financial services published by any European regulator to date. It deserves more careful reading than it received.
— The Headline Numbers, Read Together —
84%
Firms with employees using generative AI tools
22%
Firms with AI systems in production or development
32%
Firms with a written GenAI policy for employees
41%
AI-using firms with a formal AI development policy
The four numbers, read together, describe a regulated industry where employee adoption has run far ahead of institutional governance. That gap is the story.
§ 02 · The Governance Gap
Employee adoption ran ahead of institutional readiness. Now the regulator is catching up.
The single most consequential finding in the Finansinspektionen survey is the gap between two adjacent statistics. Eighty-four per cent of firms have employees using generative AI tools as part of their work. Thirty-two per cent of firms have a written policy governing how those tools may be used. The remaining 52 percentage points are firms whose employees are pasting client information, internal documents, draft communications, code, and analytical outputs into commercial AI systems — without explicit institutional rules about what is and is not appropriate. This is not, on its own, evidence of widespread misuse. It is evidence that the institutional governance layer has not caught up to the operational reality, in an industry where the institutional governance layer is normally the differentiating asset.
The gap looks even sharper when you isolate the firms that have moved beyond employee tooling into actual AI deployment. Among the 22 per cent of firms with AI systems in production or development, only 41 per cent report a formally approved policy for AI development and use. The remaining 59 per cent are deploying AI in regulated business processes without a documented policy framework governing how those systems should be built, validated, monitored, or retired. Another 27 per cent of those firms say they are working on a policy but have not yet adopted one, and 30 per cent report no policy and no plan to draft one. From a supervisory perspective, those numbers describe an institutional readiness deficit that is going to become regulatorily expensive when the AI Regulation becomes operationally enforceable.
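The arithmetic behind the governance gap can be made concrete. A minimal sketch, using only the survey's headline percentages and the 234-respondent base; the rounded firm counts are our own back-of-envelope derivation, not figures from the report:

```python
# Illustrative arithmetic only: the survey's headline percentages,
# converted into approximate firm counts among the 234 respondents.
RESPONDENTS = 234

employee_genai_use = 0.84    # firms with employees using generative AI
written_genai_policy = 0.32  # firms with a written GenAI policy
ai_in_production = 0.22      # firms with AI in production or development
dev_policy_of_users = 0.41   # of those, share with an approved AI policy

gap_pp = (employee_genai_use - written_genai_policy) * 100
firms_using = round(RESPONDENTS * employee_genai_use)        # ~197
firms_with_policy = round(RESPONDENTS * written_genai_policy)  # ~75
deployers = round(RESPONDENTS * ai_in_production)            # ~51
deployers_with_policy = round(deployers * dev_policy_of_users)  # ~21

print(f"Governance gap: {gap_pp:.0f} percentage points")
print(f"~{firms_using} firms with employee GenAI use, "
      f"~{firms_with_policy} with a written policy")
print(f"~{deployers} firms deploying AI, "
      f"~{deployers_with_policy} with an approved development policy")
```

In absolute terms, roughly 120 firms have employees using generative AI with no written rules governing that use, and roughly 30 deployers are running AI without an approved development policy.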
There is a structural reason this happened. Generative AI tools entered the workplace through a different door than every previous wave of enterprise software. Traditional financial-services technology adoption follows a predictable path: business case, vendor selection, procurement review, IT integration, security review, compliance review, training, rollout. The cycle takes nine to eighteen months and produces an institutional artifact — the policy document, the access control list, the audit log — before a single user touches the system. ChatGPT and its competitors arrived through the consumer browser. The cycle was zero. By the time the policy committee scheduled its first meeting, employees had been using the tools for six months.
That is not a uniquely Swedish phenomenon and it is not a uniquely financial-services phenomenon. What makes the Swedish data interesting is that we now have empirical confirmation of how large the gap is, in an industry that takes governance seriously. The same pattern almost certainly holds at higher magnitude in industries that take governance less seriously. The Finansinspektionen number — 32 per cent with policies in place — is probably the upper bound of governance readiness across the broader European economy, not the lower bound.
| Sub-sector | In Production | Pilot or Experimenting | No Use, No Plan |
|---|---|---|---|
| Banking & Credit Institutions | 38% | 38% | 24% |
| Insurance | 27% | 41% | 32% |
| Payments | 24% | 43% | 32% |
| Securities Markets | 16% | 51% | 33% |
| All Sectors (Survey Average) | 22% | 46% | 32% |
Banking leads on production deployment by a wide margin. Securities markets lead on experimentation but lag on production. The roughly one-in-three “no plan” share holds across the non-bank sub-sectors, suggesting the institutional resistance there is structural rather than sector-specific.
§ 03 · Sector Asymmetry
Banks build. Securities firms experiment. The asymmetry is informative.
The sub-sector breakdown in the Finansinspektionen data is the part of the report that received almost no trade-press attention and is, by some distance, the most operationally informative part of the document. Banks and credit institutions report 38 per cent of firms with AI systems in production or development — nearly double the survey average of 22 per cent and more than double the figure for securities markets at 16 per cent. The same banking sub-sector also reports the smallest share of firms with no AI use and no plan, at 24 per cent. By any reading, banking is the part of the Swedish financial sector that has actually moved AI from concept to operational reality at scale.
Securities markets show the opposite pattern. Only 16 per cent of securities firms report AI in production, but 51 per cent are running pilots or experiments — the highest experimentation rate in the survey. The combined “in production plus piloting” figure for securities firms is 67 per cent, within ten points of banking’s 76 per cent, but the distribution is dramatically different. Securities firms are exploring AI; banks are deploying it. The gap is not about awareness or interest. It is about institutional appetite to take on the operational, regulatory, and reputational responsibility of putting an AI system into a regulated production process.
There is a straightforward operational explanation for the asymmetry. Banks have the largest existing data infrastructure investment of any sub-sector, the deepest historical experience with traditional machine learning in credit scoring and fraud detection, and the most structured supervisory dialogue with Finansinspektionen on the topic. They were the natural first movers. Securities firms have a higher proportion of relatively recent fintech entrants, smaller average headcount, and trading workflows where the cost of an AI failure is concentrated and immediate — which encourages experimentation in research and back-office workflows but caution in anything that touches order execution. The asymmetry, in other words, reflects rational sub-sector economics, not differential ambition.
The implication for the next twenty-four months is that the production deployment gap between banking and the rest of the sector is likely to widen rather than close. Banks reported the highest planned investment increases across every AI category, and the firms already running AI in production are also the firms most heavily preparing for the AI Regulation. The firms that are still experimenting will face a harder transition: they will need to mature their pilot infrastructure, document their development processes, and build supervisory-grade governance frameworks essentially in parallel. The market will not be patient with that timeline.
The headline is that 84 per cent of firms have employees using generative AI. The story underneath is that 22 per cent have built anything with it, and only 41 per cent of those have a policy governing what they have built.
MMD Newswire · Editorial Read
§ 04 · Where AI Is Actually Being Used
Search, summarise, automate. The unromantic majority of real deployments.
The 44 firms in the survey with concrete AI use cases reported details on 83 specific deployments and aggregate information on a further 184. The most common categories — in declining order — are searching for or summarising information, process automation, customer insights, chatbots and virtual assistants, customer support, marketing and sales, fraud detection, text content generation, anti-money-laundering and counter-terrorism financing, software code generation, credit risk models, and translation. The pattern is unmistakable: the use cases that have actually moved into production are the ones with the lowest regulatory and reputational risk and the most direct productivity benefit. The dramatic, transformational use cases that dominate consultant decks — algorithmic trading, autonomous risk pricing, end-to-end credit decisioning — are conspicuously rare in the actual deployment data.
The technology mix underneath the use cases also flips a common assumption. Generative AI accounts for 45 per cent of the reported deployments and traditional machine learning for 41 per cent, with other AI technologies making up the remaining 14 per cent. Generative AI has, in less than three years, overtaken machine learning that financial firms have been operating in production for over a decade. That is a remarkable rate of adoption for a regulated industry, and it is exactly the kind of compositional shift that creates governance risk — the institutional muscle memory for evaluating, validating, and monitoring traditional ML is mature, and the equivalent muscle memory for generative AI is still being built. The risk is not that generative AI is being deployed in high-stakes use cases; the survey shows it largely is not. The risk is that the supervisory frameworks for evaluating it are still under construction even as deployment volume grows.
Most of the production-grade generative AI deployments in the data are extensions of existing communications and information-processing workflows. Firms are using AI to summarise public press releases, news items, and longer documents into formats suitable for social media and internal distribution. They are using AI to generate code through external services. They are using AI as a wrapper around customer support workflows. The infrastructure underneath these systems is, in every case the survey describes, a third-party generative AI service rather than internally trained models — which means the operational economics are exposed to the underlying provider’s pricing decisions and rate limits in ways that are not always visible at deployment time.
| Use Case | Reported Deployments | Editorial Read |
|---|---|---|
| Search & summarise information | 28 | The clear leader; low risk, high productivity, generative-AI native |
| Process automation | 27 | Largely traditional ML; new generative AI layers being added |
| Customer insights | 22 | Pattern recognition on structured customer data; established technique |
| Chatbots & virtual assistants | 20 | The most visibly customer-facing deployment category |
| Customer support & help desk | 16 | Increasingly using generative AI as the response engine |
| Marketing & sales | 13 | Content generation and personalisation workflows |
| Fraud detection | 12 | Mature traditional ML; generative-AI augmentation emerging |
| Text content generation | 11 | Mostly summarisation of public material for redistribution |
| AML & counter-terrorism financing | 10 | The regulator’s preferred AI use case; well-established |
| Software code generation | 10 | Almost entirely external services rather than in-house models |
| Credit risk models | 9 | Watch this one — will be high-risk under the AI Regulation |
| Translation | 9 | Largely commodified; minimal regulatory exposure |
The use case profile is unromantic and exactly what a regulator hopes for at this stage of adoption: the high-impact, low-risk applications are leading. Source: Finansinspektionen AI Survey 2024, 83 detailed use cases reported.
§ 05 · The High-Risk Exposure
Fourteen firms expect to operate high-risk AI under the new regulation. The number will grow.
The AI Regulation classifies AI systems into risk tiers and imposes substantially heavier compliance obligations on systems classified as high risk. Within financial services, the regulation explicitly identifies two categories as high-risk: AI systems used to evaluate creditworthiness or establish credit scores of natural persons, and AI systems used for risk assessment and pricing in life and health insurance. The Finansinspektionen survey asked firms to self-classify their existing and planned use cases against this framework. Four per cent of respondents reported existing use cases they assess as high-risk, all in creditworthiness assessment. Fourteen firms reported it as probable or highly probable that they will operate a high-risk use case within the next twenty-four months — covering both creditworthiness and life and health insurance pricing.
Fourteen firms is a small absolute number, but the number understates the regulatory weight of the underlying exposure. High-risk AI systems under the regulation must satisfy a comprehensive set of obligations: risk management systems across the lifecycle, data governance covering training data quality, technical documentation, automated logging, transparency to deployers and end users, human oversight, accuracy and cybersecurity standards, conformity assessment before market deployment, and ongoing post-market monitoring. The compliance burden is substantial, and it falls on firms whose existing AI governance — as the survey itself documents — is in many cases not yet mature. The fourteen firms who already see themselves heading into this category are the leading edge. The firms behind them, who will discover their use cases qualify as high-risk closer to the August 2026 application date, will face a tighter compliance timeline.
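The regulation's two explicitly named financial-services categories can be sketched as a trivial lookup. The use-case labels and the helper below are illustrative shorthand of our own, not the regulation's taxonomy, and a negative result never means a system is out of scope:

```python
# Illustrative sketch: the two financial-services categories the AI
# Regulation explicitly names as high-risk. The labels are our own
# shorthand, not regulatory text.
HIGH_RISK_USE_CASES = {
    "creditworthiness_assessment",    # credit scores of natural persons
    "life_health_insurance_pricing",  # risk assessment and pricing
}

def is_explicitly_high_risk(use_case: str) -> bool:
    """True if the use case falls in one of the two named categories.

    A False result does NOT mean the system is out of scope: other
    use cases may still qualify as high-risk depending on context.
    """
    return use_case in HIGH_RISK_USE_CASES

print(is_explicitly_high_risk("creditworthiness_assessment"))  # True
print(is_explicitly_high_risk("document_summarisation"))       # False
```

The docstring caveat is the important part: the survey suggests a number of pilot-stage firms are effectively treating a False result here as exemption, which is exactly the self-assessment Finansinspektionen flags as questionable.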
The survey’s most quietly important data point sits in the responses about preparation. Sixty-eight per cent of firms with AI in production have begun preparing for the AI Regulation, and a further 23 per cent say they will start within three months. That leaves only 9 per cent of production-AI firms who reported neither current preparation nor near-term plans. Among firms only piloting AI, the readiness picture is different and worse: a meaningful share of pilot-stage firms believe their use cases will not be subject to the regulation, an interpretation the survey notes is “in many cases questionable.” Finansinspektionen is signalling, politely, that there will be supervisory follow-up with firms whose self-assessment of regulatory exposure does not survive contact with the actual text of the regulation.
The Nordic compliance picture is further complicated by the operational reality that many Swedish financial firms operate cross-border into the UK, the broader EU, and increasingly into the US. AI governance and disclosure frameworks differ across jurisdictions in ways that are not fully harmonised, and the practical work of mapping a single AI system against multiple regulatory regimes is non-trivial. Nordic advisory practices handling cross-border compliance work report rising demand for AI-related engagements as part of broader regulatory readiness, particularly from firms that will need to satisfy both the AI Regulation and adjacent regimes simultaneously.
§ 06 · The Infrastructure Layer Underneath
The compute economics that don’t appear in the survey but shape every deployment decision.
The survey does not ask firms about the infrastructure costs underneath their AI deployments, and that absence is itself informative. A regulator focused on AI governance, risk classification, and supervisory readiness has no particular reason to ask about cloud and inference economics. But every operationally meaningful AI deployment in financial services runs on infrastructure provided by one of the three hyperscalers — Microsoft Azure, AWS, or Google Cloud — and increasingly relies on inference services from Anthropic, OpenAI, or a smaller specialised provider. Those infrastructure choices propagate into the unit economics of every deployment, the data residency posture, the latency characteristics, and the regulatory exposure profile.
For firms in the early piloting phase — the 46 per cent of the survey population — AI infrastructure costs are typically not yet a meaningful budget line, because pilot volumes are low. For firms with AI in production at scale, the picture changes quickly. A mid-sized Swedish financial firm running embedded generative AI features in customer-facing or back-office processes can plausibly spend several hundred thousand kronor per month on AI inference alone, growing materially once usage scales. A secondary market for unused cloud commitments and AI credits has emerged in response, with brokers matching buyers and sellers of hyperscaler and model-provider allocations. None of this is captured in the Finansinspektionen survey, but it is the operational reality underneath the survey numbers.
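To see how inference spend reaches that order of magnitude, a hypothetical back-of-envelope estimate; every input here (request volume, token counts, the blended per-token rate, the exchange rate) is an assumption chosen for illustration, not a survey figure or any provider's price list:

```python
# Hypothetical cost sketch. Every number below is an assumption for
# illustration only, not a survey figure or a real price list.
requests_per_day = 50_000        # e.g. support and back-office completions
tokens_per_request = 2_000       # prompt plus completion, combined
usd_per_million_tokens = 10.0    # assumed blended input/output rate
sek_per_usd = 10.5               # assumed exchange rate

monthly_tokens = requests_per_day * tokens_per_request * 30
monthly_usd = monthly_tokens / 1_000_000 * usd_per_million_tokens
monthly_sek = monthly_usd * sek_per_usd

print(f"{monthly_tokens:,} tokens/month ≈ {monthly_sek:,.0f} SEK/month")
```

Under these assumed inputs the sketch lands at roughly 315,000 SEK per month, which is the "several hundred thousand kronor" range described above; halve or double any input and the estimate moves proportionally.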
The supervisory implication is real. A financial firm whose AI deployment economics depend on a third-party generative AI provider is exposed to that provider’s pricing decisions, rate limits, model deprecation schedules, and ultimately commercial risk. The AI Regulation places obligations on the “deployer” of an AI system as well as the provider, and a deployer whose infrastructure is concentrated in a small number of third parties is structurally less robust than one with internal capability or with diversified provider arrangements. The survey does not yet probe this dimension; it is, however, the dimension on which a meaningful share of supervisory attention is likely to focus over the next twenty-four months as production AI becomes a more material part of regulated business processes.
| Risk Category | Share of AI-Using Firms Citing | Editorial Read |
|---|---|---|
| Data quality | 63% | The leading concern, well ahead of any other category |
| Data protection | 54% | GDPR overlay; particularly acute for customer-data use cases |
| Accountability | 43% | Reflects the open question of who is responsible for AI decisions |
| Third-party risks | 42% | Vendor concentration is a real concern, especially for generative AI |
| Regulatory challenges | 39% | The AI Regulation is the dominant framework, but not the only one |
| Lack of AI competence/resources | 38% | Persistent talent constraint across the sector |
| Cybersecurity | 33% | Both AI as attack surface and AI in defensive workflows |
| Model deviation & monitoring | 33% | The continuous-monitoring obligation that catches firms unprepared |
| Compliance | 30% | Distinct from regulatory; about the operational compliance workload |
| Explainability | 29% | Higher concern at firms operating generative-AI components |
| Model validation/approval | 28% | Lower than expected; suggests immaturity of validation frameworks |
| Implementation in production | 17% | Quietly significant: roughly one in six firms cite execution as a top risk |
Data quality and data protection lead the risk register. The lower-cited categories — particularly model validation and implementation in production — suggest the institutional muscle memory for AI deployment is still being built. Source: Finansinspektionen AI Survey 2024.
§ 07 · What The Trade Press Missed
Three findings that didn’t make the headlines.
The first is the international competitiveness signal. Most Swedish financial firms believe they are lagging international competitors on AI adoption — only 17 per cent rate themselves as ahead, and the median self-assessment sits clearly in “behind” territory. The two reasons firms give in the free-text responses are notable. The first is that regulation accompanying new AI implementations is perceived as burdensome. The second is that many firms see themselves as small actors internationally, with limited capacity to build out an extensive AI organisation. Neither reason is fully accurate as a description of the international competitive landscape, but both are real perceptions that drive investment behaviour. The implication is that the Swedish financial sector’s AI investment trajectory is being shaped not just by genuine competitive pressure but by the firms’ mental models of how that pressure works — and those models could be improved by better data on what international competitors are actually doing.
The second is the investment trajectory asymmetry. Firms already running AI in production plan to increase their generative AI investment over the next twenty-four months at materially higher rates than firms still in the pilot stage or with no AI use. Seventy-three per cent of production-AI firms plan to increase generative AI spend; only 26 per cent of firms with no AI use plan to. The same asymmetry holds for machine learning and other AI categories. The structural implication is that the existing AI capability gap between the leading firms and the laggards is going to widen, not narrow, over the next two years. This runs counter to the common assumption that early movers are setting up later movers to leapfrog them with newer technology. The data suggests the opposite: the early movers are pulling further ahead.
The third is the policy lag. Among firms with AI in production, 30 per cent report that they have no formal AI development policy and no plans to draft one. A further 27 per cent are drafting a policy but have not yet adopted one. Only 41 per cent have an approved policy. From a supervisory readiness perspective these numbers are concerning, but the more interesting question is what they imply about the firms themselves. A firm running AI in regulated business processes without a written governance framework is operating on individual judgement rather than institutional process. That works at small scale and breaks at large scale. It is also the kind of operational vulnerability that attracts supervisory attention the moment something goes wrong. The August 2026 AI Regulation deadline is going to surface a lot of these gaps very quickly.
— Reader Questions —
Twenty questions on the survey, answered plainly.
What is the Finansinspektionen AI survey?
A 2024 questionnaire sent by the Swedish financial supervisory authority to the firms under its supervision; 234 firms responded, an 83 per cent response rate. The survey covered employee use of generative AI, in-house AI deployment, sector breakdowns, investment plans, risk identification, and preparation for the EU AI Regulation. It is one of the most empirically rich documents on AI adoption in financial services published by any European regulator.
What is the most important finding in the report?
The gap between employee adoption (84 per cent) and institutional governance (32 per cent of firms with formal generative AI policies for employees, 41 per cent of AI-using firms with formal AI development policies). Employee adoption ran far ahead of policy frameworks, and the supervisory implications of that gap are about to become operationally consequential as the AI Regulation becomes applicable in August 2026.
Are 84 per cent of Swedish banks really using ChatGPT?
More precisely, 84 per cent of Swedish financial firms have employees who use generative AI tools as part of their work. The survey does not break this down by frequency, intensity, or institutional sanction. It is best read as a measure of penetration, not depth. The headline figure has been widely repeated; the more interesting figure underneath is how many firms have governance frameworks for that use.
Why is banking ahead of other sub-sectors on AI deployment?
Banking has the largest existing data infrastructure investment of any sub-sector, the deepest historical experience with traditional machine learning in credit scoring and fraud detection, and the most structured supervisory dialogue with Finansinspektionen on the topic. Banks were the natural first movers. Securities firms experiment more but deploy less, reflecting the concentrated cost of AI failures in trading workflows.
What does the AI Regulation classify as high-risk in financial services?
Two categories are explicitly identified: AI systems used to evaluate the creditworthiness of natural persons or establish their credit score, and AI systems used for risk assessment and pricing in life and health insurance. Other AI use cases in the financial sector may also fall into the high-risk category depending on context, but those two are the ones the regulation specifically calls out.
When does the AI Regulation actually become operational?
The regulation entered into force on 1 August 2024, but most of its substantive obligations apply from 2 August 2026. Some prohibitions and general-purpose AI rules apply earlier. For firms operating high-risk AI systems in financial services, the August 2026 date is the practical compliance deadline, and meaningful preparation work should already be in progress.
What proportion of Swedish financial firms expect to operate high-risk AI?
Currently 4 per cent of survey respondents report existing use cases they assess as high-risk, all in creditworthiness evaluation. Fourteen firms reported it as probable or highly probable that they will operate a high-risk use case within twenty-four months, covering both creditworthiness and life and health insurance pricing. The number is likely to grow as the regulation’s application date approaches and firms refine their classifications.
Is generative AI really already more common than traditional machine learning?
In the Swedish financial sector, yes — but with caveats. Generative AI accounts for 45 per cent of reported deployments and traditional machine learning for 41 per cent. That is a remarkable rate of adoption for a regulated industry over less than three years. However, the figures cover reported deployments, and traditional ML continues to underpin many of the most operationally critical processes in the sector. Generative AI is more visible; ML remains structurally important.
What are the most common AI use cases in the survey?
Searching for and summarising information leads with 28 reported deployments, followed by process automation (27), customer insights (22), chatbots and virtual assistants (20), customer support (16), marketing and sales (13), fraud detection (12), text content generation (11), AML/CFT (10), software code generation (10), credit risk models (9), and translation (9). The leading categories are exactly the low-risk, high-productivity use cases a regulator hopes to see at this stage of adoption.
What are the biggest risks firms identify in their AI use?
Data quality leads at 63 per cent, followed by data protection (54 per cent), accountability (43 per cent), third-party risks (42 per cent), regulatory challenges (39 per cent), and lack of AI competence or resources (38 per cent). The pattern is interesting: the leading risks are operational and data-related rather than algorithmic. Firms are more worried about whether their data is fit for AI than they are about model behaviour itself.
Are Swedish financial firms prepared for the AI Regulation?
Among firms with AI in production, 68 per cent have begun preparing and 23 per cent plan to start within three months — meaning 91 per cent of production-AI firms are within three months of beginning preparation. That is reasonably high readiness. The picture is less reassuring for pilot-stage firms, where a meaningful share believe their use cases will not be subject to the regulation, an interpretation Finansinspektionen describes diplomatically as “questionable.”
What is “human in the loop” and why does it matter for AI Regulation compliance?
It refers to AI systems where a human reviews, approves, or overrides the AI’s output rather than allowing fully autonomous operation. In the survey, 57 per cent of reported use cases include human in the loop, while 19 per cent are fully autonomous. The AI Regulation places significantly greater oversight obligations on autonomous AI systems, particularly in high-risk categories, which is why most production deployments today retain human review in the decision chain.
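The mechanism can be sketched in a few lines. A minimal, hypothetical review gate; the function and field names are our own illustration, not anything the survey or the regulation prescribes:

```python
# Minimal human-in-the-loop sketch: the model proposes, a human
# reviewer disposes. Names and fields are illustrative only.
def decide(model_output: dict, reviewer_approves) -> dict:
    """Return the model's proposal only if a human reviewer approves;
    otherwise escalate the proposal for manual handling."""
    if reviewer_approves(model_output):
        return {**model_output, "status": "approved"}
    return {"status": "escalated", "proposal": model_output}

proposal = {"action": "flag_transaction", "confidence": 0.72}
# Escalated here, since confidence 0.72 is below the 0.9 threshold
# the hypothetical reviewer policy applies.
print(decide(proposal, lambda p: p["confidence"] >= 0.9))
```

The design point is that the human check sits on the decision path itself, not in an after-the-fact audit; a fully autonomous system is one where this gate has been removed, which is what triggers the heavier oversight obligations.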
How are firms managing the risks they identify?
The most commonly reported measures are evaluating models, services, and suppliers; introducing human oversight; investing in employee training and education; clarifying processes; testing models; monitoring data quality; and conducting other types of data control. Forty-one per cent of AI-using firms report a formally approved AI development policy, with another 27 per cent planning to draft one. Around 30 per cent report no policy and no plan to draft one.
Why do most Swedish financial firms believe they are lagging international competitors?
Two reasons emerge in the free-text responses. First, regulation accompanying new AI implementations is perceived as burdensome and slowing development. Second, many firms see themselves as small actors internationally with limited capacity to build out extensive AI organisations. Both perceptions are real even where they are not fully accurate descriptions of the global competitive landscape, and both shape investment behaviour.
Will the gap between AI leaders and laggards widen?
According to the data, yes. Firms already running AI in production plan to increase generative AI investment at materially higher rates than firms still piloting or without AI use — 73 per cent of production-AI firms plan to increase generative AI spend versus 26 per cent of firms with no AI use. The same asymmetry holds across other AI categories. Early movers are extending their lead rather than being caught by later movers.
What about explainability of AI systems?
Sixty-four per cent of firms consider their AI use cases to have high or very high explainability. Cases rated as having low or very low explainability (13 per cent) primarily use generative AI as a sub-component within larger systems. The pattern is consistent with the broader observation that traditional ML systems are easier to explain than generative AI components, but the firms running them have built up institutional comfort with the trade-offs over time.
Why does the survey not capture infrastructure costs?
Because the regulator’s mandate is supervisory and prudential, not commercial. Cloud and inference economics are not directly within the remit of a financial supervisor, even though they shape every operational AI decision underneath the supervised activity. The absence is itself informative — it points to a dimension of AI deployment that is operationally critical but under-examined by supervisors.
How concentrated is the third-party AI infrastructure market for Swedish firms?
Highly. Almost all production AI in Swedish financial services runs on top of one of three hyperscalers (Azure, AWS, GCP) and increasingly relies on inference services from a small number of providers including Anthropic and OpenAI. The concentration is not unique to Sweden; it is a global pattern in financial services. The third-party risk concern that 42 per cent of AI-using firms cite reflects this concentration directly.
Are smaller financial firms at a structural disadvantage on AI?
The survey suggests modestly so. Smaller firms are overrepresented in the “no AI use, no plan” category, and the firms that have moved farthest with AI are also planning the largest investment increases. The disadvantage is not absolute — smaller firms can adopt productivity tools and outsource specialised AI capability — but the gap on production deployment in regulated processes is widening.
What should a financial firm do if it has not yet started AI Regulation preparation?
Begin with a complete inventory of AI use across the organisation, including employee use of public generative AI tools. Apply the regulation’s risk-tier framework to each system or use case. Identify any use case that may qualify as high-risk under the regulation. Build a written governance framework covering development, deployment, monitoring, and decommissioning. The August 2026 deadline is closer than it looks once the inventory work is started.
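The inventory-then-classify workflow described above can be sketched in code. This is an illustrative sketch only: the tier names follow the regulation's risk framework, but the purpose-to-tier mapping below is a drastic simplification (it covers only the two high-risk categories the survey itself flags, creditworthiness evaluation and life and health insurance pricing, and defaults everything else to minimal risk), and the `AIUseCase` structure is hypothetical. Real classification requires legal analysis of the regulation's annexes, not string matching.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

@dataclass
class AIUseCase:
    name: str      # internal system or project name
    purpose: str   # what the system is used for, in the firm's own taxonomy

# Simplified: only the two purposes the survey identifies as high-risk under
# the regulation. A real mapping must cover the full high-risk annex, the
# prohibited-practice list, and transparency-obligation (limited-risk) cases.
HIGH_RISK_PURPOSES = {
    "creditworthiness evaluation",
    "life and health insurance pricing",
}

def classify(use_case: AIUseCase) -> RiskTier:
    """Assign a provisional risk tier; defaults to MINIMAL in this sketch."""
    if use_case.purpose in HIGH_RISK_PURPOSES:
        return RiskTier.HIGH
    return RiskTier.MINIMAL

# A toy inventory, including employee-facing productivity use.
inventory = [
    AIUseCase("consumer credit scoring model", "creditworthiness evaluation"),
    AIUseCase("internal document summarisation", "information processing"),
    AIUseCase("health insurance pricing engine", "life and health insurance pricing"),
]

high_risk = [u.name for u in inventory if classify(u) is RiskTier.HIGH]
print(high_risk)
```

The point of even a toy version is the order of operations: the inventory must be complete before the tiering is meaningful, which is why the survey's finding that a third of AI-using firms lack any documented policy framework matters for the 2026 deadline.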
What is the Swedish financial sector’s likely AI trajectory over the next two years?
Continued production deployment in low-risk, high-productivity categories, particularly information processing, automation, and customer support. Growing high-risk exposure in creditworthiness evaluation and life and health insurance pricing as deployments mature. Tighter institutional governance forced by AI Regulation compliance. Concentration of AI capability in firms that are already leaders, with the gap to laggards widening rather than closing. The structural picture is one of accelerating differentiation rather than catch-up.
Source · Primary Document
Finansinspektionen, AI in the Swedish Financial Sector, FI Ref. 24-18158, 6 December 2024.
Published by the Swedish Financial Supervisory Authority. Read the full report: finansinspektionen.se
— Editor’s Note —
On reading regulators slowly.
Reports from financial supervisors are an under-exploited source of business intelligence in the technology economy. They are written carefully, sourced rigorously, and produced specifically to inform institutional decision-making. They are also generally read once, summarised in a paragraph, and forgotten. The Finansinspektionen AI survey deserves longer reading than it received in the Nordic trade press, both because the underlying data is genuinely informative and because the regulator’s own framing of the data tells you something useful about how supervisors are thinking about AI in regulated industries.
MMD Newswire is editorially independent. The interpretations, framings, and structural reads in this article are our own. Readers in regulated industries should treat this as a starting framework for thinking about the survey’s findings, not a substitute for the legal and compliance work that AI Regulation readiness actually requires. The full Finansinspektionen report is publicly available from the regulator and worth reading in primary form, particularly for compliance, risk, and technology functions whose work is about to become noticeably more demanding.
