December 2025
Content, Media and Technology
High Court issues judgment in Getty Images v Stability AI on AI-related trade marks and copyright
On 4 November 2025, the High Court (ChD) issued judgment in Getty Images (and others) v Stability AI ([2025] EWHC 2863 (Ch)), addressing trade mark infringement, passing off and secondary copyright infringement arising from the Stable Diffusion generative AI models.
The Court held that Stability AI was not legally responsible for the publication of Stable Diffusion v1.x via the CompVis GitHub and Hugging Face pages, but was responsible for releases via its own API/Developer Platform and DreamStudio. The Court also found that real-world use in the UK had generated watermarked outputs.
On infringement under section 10 of the Trade Marks Act 1994 ('TMA'): under section 10(1), double identity infringement was established only for the iStock word marks by v1.x when accessed via the Developer Platform and DreamStudio, and only on the basis of specific examples; the claim failed for the Getty Images marks. Under section 10(2), a likelihood of confusion was established for the iStock marks by v1.x (via the Developer Platform and DreamStudio) and for the Getty Images marks by v2.1, again on specific examples. The section 10(3) claim (dilution, tarnishment and unfair advantage) failed. The Court declined to determine passing off, and no additional damages were awarded.
The secondary copyright infringement claim was dismissed. The Court held that, although an article can be intangible, Stable Diffusion's model weights are not an infringing copy because they do not, and never have, stored the copyright works. The Court rejected reliance on section 27 to deem the model weights an infringing copy solely because the Stable Diffusion tool was made using infringing inputs, finding that sections 22 and 23 target dealings in articles that are themselves copies, not products made with the benefit of copies. The Court indicated that any infringement finding would in any event have concerned only downloadable model weights supplied in the UK, not cloud-hosted models on non-UK servers, and it made no finding on the number of works used in training.
Find out more about the trade mark ruling here and the copyright ruling here.
ASA publishes Black Friday social responsibility guidance
On 23 October 2025, the Advertising Standards Authority ('ASA') published guidance on upcoming Black Friday advertising, focusing on responsible marketing to vulnerable consumers and clarity of promotions.
In 2024, the ASA upheld complaints about credit and loan ads that encouraged people to spend more than they could afford around Black Friday, banning a credit ad that normalised non-essential purchases and a loan ad that trivialised borrowing through its emphasis on speed and ease.
The ASA also found urgency claims for cosmetic procedures irresponsible where they pressured consumers into decisions without the time and thought needed to consider the procedures. It reiterated that savings claims must be genuine and substantiated, 'up to' claims must reflect a significant proportion of the products covered, promotions should not be extended, demand should be reasonably estimated, and significant conditions must be made clear.
Find out more about the latest ASA rulings and advertising compliance requirements in our Q3 2025 ASA rulings roundup here.
International Association of Privacy Professionals publishes an analysis on lawful AI training under the GDPR
On 5 November 2025, the IAPP published an analysis of the legal bases and constraints for training AI models with personal data under the GDPR and the EU AI Act. The piece highlights Meta's plan to train models on EU users' public posts, relying on legitimate interests with an objection window, and NOYB's cease-and-desist letter and contemplated class action.
The analysis contrasts first-party or defined third-party datasets with web scraping, noting the EDPB's two sourcing approaches. It states that consent is feasible for first-party data but impractical for large-scale scraping, and cites multiple DPA enforcement actions against Clearview AI in France, Greece, Italy, the Netherlands and Sweden, with fines ranging from EUR 250,000 to EUR 30.5 million and orders to delete data.
It explains that organisations often invoke legitimate interests for purposes such as conversational agents, fraud detection and threat detection, but face risks under Article 9 where special category data is processed. It notes the need either to exclude such data or to rely on an Article 9(2) condition, including data manifestly made public by the data subject, and flags transparency and objection-handling challenges.
The analysis observes that Article 22 typically does not cover generative AI outputs but may be engaged in use cases such as facial recognition onboarding, where explicit consent may be required. It also notes debate on whether Article 10(5) of the EU AI Act allows limited special category processing strictly for bias detection in high-risk systems, pending clarification by the European Commission or the EDPB.
Find out more on AI training and Data Protection from Victoria Horden here.