
Labels for images created by artificial intelligence are arriving on Facebook and Instagram

Labels to distinguish AI-generated images are coming to posts on Facebook, Instagram and Threads: Meta, the company that controls three of the most popular social networks, is working on this. It will also penalize users who do not openly declare that they have shared videos made this way.

According to what has been explained, the feature should become visible “over the next few months”, almost certainly “in every language in which the various apps are available”. Not only that: Nick Clegg, Meta’s president of Global Affairs, announced that the company is also working with other companies in the sector to arrive at a common standard for identifying this type of image.



How to recognize an image made with AI

The point lies exactly here, in being able to tell what is real from what is artificial: Meta already tags as Created with AI the images generated with its own artificial intelligence (called Meta AI), as do many others, from Adobe for images created within Photoshop to OpenAI for those made with DALL-E 3. The difficulty lies in identifying images built outside its own field of action, and it is an even greater difficulty for Meta, given that many of the images shared on Facebook, Instagram and Threads come from external sources.

Because of this, Clegg explained online (here) that Meta will work to label “images from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock”, working with those companies on the most effective strategies for “adding metadata to images created by their tools”. It was Google that announced last year that such labels would be arriving on YouTube and its other platforms: “In the coming months, we will introduce labels that will inform viewers when the content they are seeing is artificially generated,” YouTube CEO Neal Mohan reiterated recently.
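To give an idea of what metadata-based labelling can look like in practice, here is a minimal sketch that scans an image file for provenance markers that some generators embed. The marker strings, the helper name looks_ai_labelled and the file name downloaded_image.jpg are assumptions for this example, not Meta’s actual method: a real implementation would parse the C2PA manifest or the IPTC/XMP metadata properly rather than searching raw bytes.

```python
# Rough sketch: scan a file for provenance markers that some AI tools embed.
# The marker strings below are illustrative; real tools parse the C2PA manifest
# or the IPTC/XMP metadata properly instead of grepping the raw bytes.
from pathlib import Path

PROVENANCE_MARKERS = [
    b"c2pa",                     # Content Credentials manifest label
    b"trainedAlgorithmicMedia",  # IPTC DigitalSourceType value used for AI media
]

def looks_ai_labelled(image_path: str) -> bool:
    data = Path(image_path).read_bytes()
    return any(marker in data for marker in PROVENANCE_MARKERS)

print(looks_ai_labelled("downloaded_image.jpg"))  # hypothetical file name
```

The catch, as the article notes, is that metadata only helps when the generating tool actually writes it and the platforms agree on a common standard for reading it.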

At the moment there are no details on which tools and technologies will be used to carry out this work, but experience suggests it will probably be software based on other artificial intelligences: as we have already explained on Italian Tech, AIs are very good at recognizing content created by other AIs. This works with images (the reflections in the corneas, the details of hands and feet, the shadows) but can also be used with texts, so much so that today there are already tools that allow you to verify whether a text is authentic and of human origin or not (here, an example of use with the songs of Sanremo).
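To make the AI-detecting-AI idea concrete, the sketch below runs an off-the-shelf image classifier over a suspect photo. The model id example-org/ai-image-detector and the file name suspect_photo.jpg are placeholders, not the tools Meta will actually use; any classifier fine-tuned to separate real photos from generated ones could be dropped in.

```python
# Hedged sketch: classify an image as real or AI-generated with a generic
# Hugging Face image-classification pipeline. The model id is a placeholder.
from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="example-org/ai-image-detector",  # hypothetical detector model
)

# The pipeline accepts a file path, URL or PIL image and returns labels with
# scores, e.g. [{"label": "artificial", "score": 0.97}, {"label": "real", "score": 0.03}]
predictions = detector("suspect_photo.jpg")
print(predictions)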




Why it is important to distinguish images made with AI

That’s not all: Clegg also explained that Meta will soon start requiring users to openly declare when they share videos or audio made with artificial intelligence. Otherwise, “the range of sanctions that will be applied will cover the entire range of possibilities, from warnings to the removal” of the offending post.

Meta’s decisions come now because, as Clegg pointed out, this year “a number of important elections will take place around the world” and it is therefore essential to distinguish the true from the false, even when it comes to images. For some time now (at least five years), humanity has been dealing with so-called deepfakes, i.e. photos, videos and audio that are near-perfect fakes built with AI, which can deceive people and contribute to the spread of fake news and disinformation.

Images like that of the Pope in the white puffer jacket, or of Donald Trump being arrested in New York, demonstrate the potential danger of these tools. It is a danger the Biden administration in the United States has also recognized: last October it signed an executive order urging companies to take action to ensure safe, informed and reliable use of AI. And it is a danger the industry’s main players are aware of too, having gathered in the Coalition for Content Provenance and Authenticity (abbreviated as C2PA) precisely to establish standards for the correct use of AI, starting with images.

@capoema


