Mark Zuckerberg’s Meta is moving forward with its controversial plan to use millions of UK users’ Facebook and Instagram posts to train its artificial intelligence (AI) technology, a practice that would be illegal under the privacy laws of the European Union (EU).
Meta announced that it had engaged constructively with the UK’s Information Commissioner’s Office (ICO) regarding the proposal.
This follows a temporary halt in both the UK and the EU in June, after the ICO cautioned tech companies to respect users’ privacy when developing generative AI systems.
Although Meta has resumed its plan, the ICO clarified on Friday that it has not given formal regulatory approval.
Instead, it will monitor the initiative following adjustments Meta made to its approach, such as simplifying the process for users to opt out of having their posts used for AI training.
Privacy advocates, including groups like the Open Rights Group (ORG) and None of Your Business (NOYB), have expressed serious concerns about Meta’s plans.
When these proposals were initially discussed, ORG criticized Meta for “turning all of us into unwilling (and unpaid) test subjects for their experiments.” Along with NOYB, they called on the ICO and EU regulators to block the initiative.
For now, Meta’s plans remain suspended in Europe. The company has accused the EU of stifling AI innovation by preventing the use of EU citizens’ posts for AI model training.
However, on Friday, Meta confirmed that it would proceed with using publicly shared posts from UK Facebook and Instagram users to train its AI models.
Meta emphasized that private messages and content from users under the age of 18 will not be included in this data collection.
In a statement, Meta explained: “This ensures that our generative AI models will incorporate British culture, history, and language, enabling UK companies and institutions to benefit from the latest advancements in technology.
At Meta, we are building AI to reflect diverse global communities, and we look forward to expanding it to more countries and languages later this year.”
Stephen Almond, the ICO’s executive director for regulatory risk, stressed the importance of transparency when using users’ data for AI training.
He said: “We have consistently stated that any organization using personal data to train generative AI models must be transparent about how that data is used.”
Almond further advised that organizations should implement appropriate safeguards before using personal data for model training, including offering a straightforward way for users to object to their data being processed.
He added: “The ICO has not granted regulatory approval for this data processing, and it is Meta’s responsibility to ensure and demonstrate its ongoing compliance with relevant regulations.”