Safety and accountability take centre stage
Regulators are moving from discussion to action on harmful AI outputs:
- The UK's new deepfake law makes creating non-consensual intimate imagery via AI a criminal offence, putting pressure on platforms like Grok to enforce stronger safeguards.
- Ofcom's formal probe into xAI's systems indicates regulators are watching not just outcomes but the platforms that enable misuse.
- Elon Musk's public resistance frames the issue as a tension between safety mandates and platform freedom, a debate likely to ripple across other jurisdictions.
For executives and developers, this moment underscores that AI governance isn't theoretical: laws are shaping platform practices and risk profiles now.
