2026 looks set to be another busy year for those working in data privacy, AI and digital governance: more legal requirements, an explosion of new technologies, a strong drive for innovation and growth, and increasingly unstable global geopolitics.
The close link between geopolitics and a country’s approach to laws and regulation is nothing new. But, with rising trade barriers, growing distrust between nations, the fragmentation of international alliances, a global race for AI dominance, and a desperate search for economic growth, geopolitics has been pushed to the very top of the agenda for 2026.
With global instability as a (slightly frightening) backdrop, we have pulled together five key trends, developments and practices in data privacy, AI and digital governance that could define 2026.
1. Data transfers as a geopolitical tool
There was perhaps a time when we looked at international data transfers (and the rules restricting them) as being grounded in considerations like data protection, privacy and the rights of individuals. Some of those considerations may still be in play, but what we are now seeing is clear evidence that global geopolitics and economics are the primary factors determining how (and to where) personal data can be transferred. Just as we have seen restrictions on data transfers used to impose non-tariff barriers between states, or bans on transfers of personal data to “foreign adversaries” or “countries of concern”, we have also seen data privacy regimes recognised as adequate recipients of personal data based largely (if not exclusively) on good relations between governments (as recently seen between the USA and Argentina).
The traditional approach to adequacy decisions and international transfers remains relevant in 2026: jurisdictions around the world continue to develop privacy laws based on the GDPR in the hope of fast-tracking adequacy (such as Bermuda, whose comprehensive data protection law came into force on 1 January 2025). Even so, we expect data flows to continue to be weaponised, through restriction or liberalisation, for geopolitical or diplomatic reasons.
2. A fierce battle between AI innovation and regulation
AI technologies are being held up as one of the biggest drivers of economic growth across much of the world. This is most obvious in the United States, the European Union and China (with an honourable mention for Japan and South Korea in recognition of their leading legislative approach to AI regulation). As each of these competitors vies to lead the world in AI, we are moving into an era of supercharged global competition for AI dominance. This, in turn, has led to differing approaches to AI regulation, as jurisdictions look to balance their own geopolitical aims and principles against their perceived need to innovate at breakneck speeds.
The EU has looked at pausing the AI Act, while moving gently towards broader deregulation through the Digital Omnibus. There is huge (and increasing) deregulatory pressure coming from the Federal Government in the United States. All the while, a pragmatic, explicitly innovation-friendly approach to AI regulation looks to be taking hold in APAC (see Australia, Japan, South Korea). Exactly where we will end up is hard to predict, but with Brussels taking a moment to reflect on the need to “streamline regulations”, “bring relief to businesses” and “stimulate competitiveness”, it is hard to see a 2026 where broad, comprehensive AI laws meaningfully proliferate.
3. More privacy laws, more (and different) enforcement
In the past year we have seen new privacy laws (or significant new legal obligations) in Bangladesh, Bermuda, Chile, Egypt, India, Malaysia, Nigeria, Peru, Vietnam and of course the US states (to name but a few!). So while AI and digital governance have become more important, privacy is not dead! In 2026, privacy laws will continue to be key compliance frameworks around the world, whether for more traditional privacy compliance issues or for more innovative AI projects. Even where a jurisdiction enacts a standalone AI law alongside a privacy law, this does not detract from the fact that virtually all AI solutions involve personal data to some extent, which in turn triggers a raft of traditional privacy considerations.
The enforcement of privacy laws should also give pause for thought: we have seen a continuation of active regulatory enforcement worldwide (we tracked nearly 1,000 enforcement actions across 130 jurisdictions in 2025, totalling over $3.5 billion in penalties). But, within these actions, we have also seen regulators relying more readily on new (and slightly more collaborative) measures beyond simple fines (administrative warnings, settlements, sandboxes, etc.). Helpful? Hopefully.
4. Crossovers, confusion and uncertainty across global digital regulations
With the desks of privacy pros becoming ever more congested with questions about data protection and AI (not to mention queries about the exact difference between the multitude of European laws with “data” or “digital” in their title), 2026 brings with it a need to cut out noise and focus on what matters. While the conversation around digital regulation is moving towards simplification and streamlining, there is still a prevailing feeling of complexity and confusion around how to navigate the changes, the overlaps and the gaps between different frameworks.
With that in mind, there should be a focus on cutting through news, conjecture and opinion pieces, and looking strictly at what an organisation actually needs to be doing. This means tracking concrete developments, regulatory consultations, laws and guidance pieces. This means preparing for, and reacting to, hard facts, rather than being swayed by vague concepts like “direction of travel”. Reduce the noise, respond to facts.
5. Children’s privacy in the spotlight
With innovative technologies bringing increased societal and political focus on privacy considerations generally, and with concerns around the impact of these technologies on children reaching new heights, 2026 could be a pivotal year for kids’ privacy. Whether we look at the UK, at Australia, or at the USA, there are now clear global moves across the political spectrum to tackle issues related to online safety, age verification, social media use, and the use of mobile devices in schools. The trend here is clear, and it is now surely a question of when, not if, we see the introduction of more rules, regulations and laws around children’s data, age assurance, and kids’ privacy more broadly.
These points are based on, and inspired by, an aosphere webinar hosted by the IAPP on 20 January 2026.
How Rulefinder Data Privacy can help
Rulefinder Data Privacy subscribers hear about these and other privacy law developments as soon as we cover them.