In Penhallurick v MD5 Ltd [2021] EWHC 293 (IPEC), the Court held that the copyright in various literary works relating to software that Mr. Penhallurick created during his tenure with his former employer, MD5, belonged to MD5. The Court found that the works were created in the course of Mr. Penhallurick’s employment, with the result that MD5 was deemed the owner of the works under the Copyright, Designs and Patents Act 1988, despite the fact that some of the work was done from Mr. Penhallurick’s home, outside normal office hours, and using his own computer.

Continue Reading UK Court Rules on Copyright over Software Developed Whilst Working at Home

On February 4, 2020, the United Kingdom’s Centre for Data Ethics and Innovation (“CDEI”) published its final report on “online targeting” (the “Report”), examining practices used to monitor a person’s online behaviour and subsequently customize their experience. In October 2018, the UK government established the CDEI, an expert committee that advises the government on how to maximize the benefits of new technologies, and asked it to explore how data is used in shaping people’s online experiences. The Report sets out its findings and recommendations.
Continue Reading Centre for Data Ethics and Innovation publishes final report on “online targeting”

On July 25, 2019, the UK’s Information Commissioner’s Office (“ICO”) published a blog on the trade-offs between different data protection principles when using Artificial Intelligence (“AI”).  The ICO recognizes that AI systems must comply with several data protection principles and requirements, which at times may pull organizations in different directions.  The blog identifies notable trade-offs that may arise, provides some practical tips for resolving these trade-offs, and offers worked examples on visualizing and mathematically minimizing trade-offs.

The ICO invites organizations with experience considering these complex issues to provide their views.  This recent blog post on trade-offs is part of its ongoing Call for Input on developing a new framework for auditing AI.  See also our earlier blog on the ICO’s call for input on bias and discrimination in AI systems here.


Continue Reading ICO publishes blog post on AI and trade-offs between data protection principles

On June 3, 2019, the UK Information Commissioner’s Office (“ICO”) released an Interim Report on a collaboration project with The Alan Turing Institute (“Institute”) called “Project ExplAIn.” The purpose of this project, according to the ICO, is to develop “practical guidance” for organizations on complying with UK data protection law when using artificial intelligence (“AI”) decision-making systems; in particular, on how to explain the impact AI decisions may have on individuals. The Interim Report may be of particular relevance to organizations considering how to meet transparency obligations when deploying AI systems that make automated decisions falling within the scope of Article 22 of the GDPR.

Continue Reading AI Update: ICO’s Interim Report on Explaining AI

On May 1, 2019, the UK’s Department for Digital, Culture, Media and Sport (“DCMS”) launched a public consultation (“Consultation”) regarding plans to pursue new laws aimed at securing internet connected devices. The Consultation follows the UK’s publication of its final Code of Practice for Consumer IoT Security (“Code of Practice”) last October (the subject of another Covington blog available here) and is targeted at device manufacturers, IoT service providers, mobile application developers, retailers and those with a direct or indirect interest in the field of consumer IoT security.

Continue Reading IoT Update: The UK Announces Plans for New Connected Device Laws

Earlier this month, the UK’s Information Commissioner’s Office published a draft code of practice (“Code”) on designing online services for children. The Code is now open for public consultation until May 31, 2019. The Code sets out 16 standards of “age appropriate design” with which online service providers should comply when designing online services likely to be accessed by children.

On January 23, 2019, the UK’s Competition and Markets Authority (“CMA”) announced that it had secured undertakings from 16 social media influencers, including well-known names such as Ellie Goulding, Rosie Huntington-Whiteley and Rita Ora, that commit each influencer to increased transparency when they promote or endorse brands or services on social media on behalf of businesses.

The CMA stressed that applicable UK consumer law requires that it be made clear when posts are sponsored (i.e., paid or incentivized).  The CMA also disclosed that it has sent warning letters to other (unidentified) influencers and celebrities, and indicated it will continue to consider the role of social media platforms in this issue.

This enforcement action, together with the CMA’s recent success in court against secondary ticketing website Viagogo, and more recent threat to take Viagogo to court again, is evidence that consumer protection enforcement remains high on the CMA’s agenda.

Below, we summarise key elements of the undertakings in more detail, and also refer to further available UK regulatory guidance on how to advertise on social media.


Continue Reading UK Consumer Protection Regulator (“CMA”) Extracts Undertakings from Social Media Influencers to Increase Transparency in Sponsored Posts

Following an informal consultation earlier this year – as covered by our previous IoT Update here – the UK’s Department for Digital, Culture, Media and Sport (“DCMS”) published the final version of its Code of Practice for Consumer IoT Security (“Code”) on October 14, 2018. The Code was developed by the DCMS in conjunction with the National Cyber Security Centre, following engagement with industry, consumer associations, and academia. The aim of the Code is to provide all organizations involved in developing, manufacturing, and retailing consumer Internet of Things (“IoT”) products with guidelines on how to achieve a “secure by design” approach. Each of the thirteen guidelines is marked as primarily applying to one or more of the following categories: device manufacturers, IoT service providers, mobile application developers, and/or retailers.

The Code brings together what is widely considered good practice in IoT security. At the moment, participation in the Code is voluntary, but it aims to initiate and facilitate security change throughout the supply chain and compliance with applicable data protection laws. The Code is supported by a supplementary mapping document and an open data JSON file, which cross-refer to the other main industry standards, recommendations, and guidance.  Ultimately, the Government’s ambition is for appropriate aspects of the Code to become legally enforceable, and it has commenced a mapping exercise to identify the impact of regulatory intervention and any necessary changes.


Continue Reading IoT Update: The UK publishes a final version of its Code of Practice for Consumer IoT Security

Drawing on evidence from 280 witnesses across government, academia, and industry, and nine months of investigation, the UK House of Lords Select Committee on Artificial Intelligence published its report, “AI in the UK: ready, willing and able?”, on April 16, 2018 (the “Report”). The Report considers the future of AI in the UK, from perceived opportunities to risks and challenges. In addition to scoping the legal and regulatory landscape, the Report considers the role of AI in a social and economic context, and proposes a set of ethical guidelines. This blog post sets out those ethical guidelines and summarises some of the key features of the Report.
Continue Reading AI Update: House of Lords Select Committee publishes report on the future of AI in the UK

The UK House of Lords Select Committee on Communications has recently opened a public consultation on “The Regulation of the Internet,” with submissions accepted until Friday, May 11. The Call for Evidence can be accessed here.

The nine questions posed are relatively broad in scope, including: whether there is a need to introduce