Spain will proceed with its own regulations to enhance safety in the digital environment, as announced by the Minister for Digital Transformation, Óscar López. The initiative includes limiting high-risk AI applications and compelling social media platforms to offer greater transparency about how their algorithms operate. The underlying political rationale is clear: the profits of a few tech giants should not supersede the rights of millions of users.
The Spanish proposal addresses several key areas. The most prominent is the protection of minors, with the government aiming to prohibit social media access for individuals under 16. This measure, previously announced by Pedro Sánchez in February 2026, is linked to more stringent age verification systems. The government also plans to bolster controls on social networks and online platforms as part of a “European digital sovereignty” strategy, which includes age verification on social and adult content sites.
Spain Tackles AI Regulation, Social Media, Hate Speech Monitoring, and Age Verification
Beyond child protection and age verification, a second significant aspect focuses on holding platforms and their executives accountable. The government intends for tech executives to be directly liable for the presence of illegal or harmful content, particularly hate speech and abuses related to generative AI. The minister emphasized that online anonymity should not shield criminal activity, aligning with previous government efforts to combat deepfakes, manipulated images, and unauthorized use of voice or likeness.
The third pillar directly concerns artificial intelligence. In March 2025, Spain approved a draft law for the ethical use and governance of AI, designed to align national legislation with the European AI Regulation. This law aims to foster AI that is “ethical, inclusive, and beneficial,” while also identifying prohibited practices, high-risk systems, and transparency obligations.
Meanwhile, the European Commission is intensifying its pressure on social media platforms such as TikTok, Instagram, Facebook, and X over addictive design features. Ursula von der Leyen has proposed new measures to protect minors from “infinite scroll” and is advocating for the elimination of auto-playing videos, stronger action against cyberbullying, and the removal of harmful content. Brussels is also working with a special panel on online child safety, which is expected to deliver recommendations before summer 2026, including potential harmonized age restrictions for social media.
Spanish Regulation Leverages the EU’s Digital Services Act (DSA)
The European Commission reiterates that platforms accessible to minors must safeguard their physical and mental well-being, privacy, and security. Regulatory pressure has escalated due to recent incidents involving deepfakes and AI-generated material, following a plea from a Spanish activist who was a victim of AI-generated fake images, calling for tougher rules and mechanisms to trace anonymous accounts involved in digital violence.
For technology companies, the implications extend beyond reputation to operational and economic aspects. New obligations could entail algorithmic audits, stricter age verification mechanisms, labeling of AI-generated content, expedited removal of illegal material, restrictions on recommendation systems, and increased exposure to sanctions. The industry has voiced opposition to such regulations, citing administrative burdens, risks to innovation, and regulatory fragmentation.
In terms of political stance, Spain aims to be at the forefront of Europe’s more interventionist approach to digital policy. The government’s strategy is driven by three main arguments: child protection, combating AI-generated abuses, and democratic oversight of platforms with significant social influence. The core argument is that social media can no longer be considered mere neutral intermediaries when their algorithms amplify content, shape public attention, and expose minors to addictive or harmful dynamics.
Ultimately, the real challenge lies in practical implementation. Prohibiting social media access for those under 16 requires robust verification systems, which raise privacy and data-security concerns and remain vulnerable to evasion. Holding executives accountable will require precise definitions of when a platform has failed in its duty of care. Demanding algorithmic transparency may conflict with trade secrets, system security, and the technical complexity of recommendation models. Furthermore, Europe has reminded Pedro Sánchez that it holds the ultimate legislative power over social media regulation.
