We are adding AI to more of our products and services to boost creativity and productivity. At the same time, we want to help people understand how content is created and changed. Everyone needs to have this information, so we are investing in tools and new solutions like SynthID to make it available.
We also know that working with others in the industry is essential to increase overall transparency online, as content travels between platforms. That’s why we joined the Coalition for Content Provenance and Authenticity (C2PA) as a steering committee member earlier this year.
Today, we want to share how we are helping to develop the latest C2PA provenance technology and bring it to our products.
Improving Provenance Technology to Make Content Credentials More Secure
Provenance technology can show if a photo was taken with a camera, edited with software, or made by generative AI. This information helps users make better choices about the content they see, such as photos, videos, and audio, and builds trust and media literacy.
As a steering committee member of the C2PA, we have worked with other members to improve the technology that attaches provenance information to content. In the first half of this year, Google helped develop the latest version of the standard (2.1). This version is more secure against tampering because it imposes stricter requirements for verifying content history. By strengthening these protections, we help ensure the attached data is accurate and not misleading.
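To make the idea of attached provenance concrete, here is a minimal illustrative sketch of how a manifest might bind an edit history to a piece of content through a cryptographic hash. This is not the actual C2PA data model; the field names and structure are assumptions for illustration only.

```python
import hashlib

def make_manifest(content: bytes, actions: list[str], generator: str) -> dict:
    """Build a toy provenance manifest that binds an action history
    to the content via a SHA-256 hash (illustrative only)."""
    return {
        "content_hash": hashlib.sha256(content).hexdigest(),
        "actions": actions,            # e.g. ["captured", "edited"]
        "claim_generator": generator,  # tool that produced the claim
    }

def verify_binding(content: bytes, manifest: dict) -> bool:
    """Any change to the content breaks the hash binding,
    so stale or copied provenance data can be detected."""
    return manifest["content_hash"] == hashlib.sha256(content).hexdigest()

photo = b"...image bytes..."
manifest = make_manifest(photo, ["captured", "edited"], "ExampleEditor/1.0")
print(verify_binding(photo, manifest))         # True
print(verify_binding(photo + b"x", manifest))  # False: content was altered
```

In the real standard, manifests are also cryptographically signed, which is what the stricter verification requirements in version 2.1 strengthen.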
Adding The C2PA Standard To Our Products
In the next few months, we will add this new version of content credentials to some of our main products:
- Search: If an image has C2PA metadata, people can use our “About this image” feature to check if it was created or edited with AI tools. This feature provides people with greater context about images they find online and is available in Google Images, Lens, and Circle to Search.
- Ads: We are starting to add C2PA metadata to our ad systems. Over time, we plan to expand this and use C2PA signals to help guide our policy enforcement.
We are also looking at ways to share C2PA information with viewers on YouTube when content is recorded with a camera. We will provide more updates about this later in the year.
We will ensure our implementations validate content against the upcoming C2PA trust list, which helps platforms confirm the content’s origin. For example, if the metadata indicates an image was captured with a specific camera model, the trust list can help verify that claim. We are implementing content provenance technology today, and we’ll continue to bring it to more products over time.
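The trust-list check described above can be sketched as follows. The trust-list structure, issuer names, and device models here are hypothetical assumptions, not the actual C2PA trust list schema; the point is only that a capture claim is accepted when the manifest’s signer is authorized to vouch for the claimed device.

```python
# Hypothetical trust list: maps certificate issuers to the device
# models they are authorized to vouch for (illustrative only).
TRUST_LIST: dict[str, set[str]] = {
    "ExampleCameraCorp CA": {"ExampleCam X100", "ExampleCam X200"},
}

def capture_claim_is_trusted(signer_issuer: str, claimed_device: str) -> bool:
    """Accept a 'captured with this camera' claim only if the
    manifest's signer is on the trust list for that device model."""
    return claimed_device in TRUST_LIST.get(signer_issuer, set())

print(capture_claim_is_trusted("ExampleCameraCorp CA", "ExampleCam X100"))  # True
print(capture_claim_is_trusted("Unknown CA", "ExampleCam X100"))            # False
```

The design point is that the metadata alone is not enough: a platform also needs an external source of truth about who may sign which claims, which is the role the trust list plays.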
Working With Others In The Industry
Setting up and displaying content provenance remains a difficult challenge, and it varies by product or service. There is no single solution for all online content, so working with others in the industry is key to developing lasting and compatible solutions. That’s why we encourage hardware providers to consider using C2PA’s content credentials.
Our work with the C2PA supports our wider efforts to be transparent and develop AI responsibly. For example, we are expanding SynthID, a watermarking tool from Google DeepMind, to more generative AI tools and more types of media. We have also joined other groups focused on AI safety and research, introduced the Secure AI Framework (SAIF) and an accompanying coalition, and are making progress on the voluntary commitments we made at the White House last year.
Source: How we’re increasing transparency for gen AI content with the C2PA