AI's deepfake problem: how can blockchains help?

July 25, 2024

The internet's evolution offers a fascinating story of increasing user empowerment. Web1 was a corporate-controlled, read-only experience; Web2 revolutionised this with user-generated content on social media and blogs, fostering participation and a more democratic web. More recently, Web3 has built on this by giving users ownership and control over their data and creations through blockchain technology, building online digital and global communities. This decentralisation has further democratised the internet by reducing reliance on large corporations. Web3 also draws together many of the technologies driving the internet's growth - blockchain, virtual reality (VR), artificial intelligence (AI), the internet of things, cloud computing, machine learning and quantum computing. However, this very growth presents new challenges - whilst AI has undeniably created more tools for content creation and expression, these tools can also be misused to create deepfakes, which Web3 certainly needs to address if it is to maintain a truly democratic online space. Deepfake-related fraud in the US surged 1,200% in the first quarter of 2023, and Harvard experts postulate that, in the future, over 90% of online content will be AI-generated. The proliferation of deepfakes (whether audio or visual) is, according to IBM, one of the most pressing AI issues - and one that blockchain technology is potentially poised to tackle.

There are two ways whereby blockchains can be introduced to AI - before and after an AI system/bot is trained. Deepfakes - AI-generated videos or audio that realistically recreate people's words or likeness - pose a growing threat and, as deep learning technology becomes more sophisticated, it is increasingly difficult to distinguish real from fake. And, as the University of New South Wales in Australia has highlighted: “…deepfakes are creating havoc across the globe, spreading fake news and pornography, being used to steal identities, exploiting celebrities, scamming ordinary people and even influencing elections.” Moreover, traditional methods such as fact-checking websites can be overwhelmed by the sheer volume of content, and relying on a person's reputation online is certainly not foolproof. So, this is where blockchains can assist: by providing a secure digital ledger that creates an immutable record of the original content, they could revolutionise content verification and make it much harder to pass off a convincing deepfake as genuine. A 2021 study by Rashid et al. emphasises the importance of ethical AI development to ensure future technology benefits everyone. Their research proposes a comprehensive approach that combines state-of-the-art hashing methods (for content verification), integrity measures (to ensure data has not been tampered with), robust security measures and globally adopted blockchains. This combination empowers users to determine whether content originates from a trusted source - in simpler terms, the study highlights the need for transparency in AI development. Using a combination of advanced techniques and blockchain technology, it proposes a system that allows users to verify the origin and authenticity of content generated by AI systems; this focus on traceability is crucial for building trust in AI and ensuring it is used appropriately.
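To make the hashing idea concrete, here is a minimal sketch (a hypothetical illustration, not the actual system Rashid et al. describe) of how a content creator could anchor an original file's hash on an append-only record, so that any later copy can be checked byte-for-byte. A Python dictionary stands in for the blockchain ledger:

```python
import hashlib
import time

# Hypothetical append-only ledger: in practice this would be a blockchain;
# here an in-memory dict stands in for an immutable record store.
ledger = {}

def register_content(content: bytes, creator: str) -> str:
    """Hash the original content and record the digest on the 'ledger'."""
    digest = hashlib.sha256(content).hexdigest()
    ledger[digest] = {"creator": creator, "timestamp": time.time()}
    return digest

def verify_content(content: bytes):
    """Re-hash the content; a match proves it is identical to the original."""
    return ledger.get(hashlib.sha256(content).hexdigest())

original = b"Interview footage, camera A, 2024-07-01"
register_content(original, creator="newsroom@example")

assert verify_content(original) is not None          # authentic copy
assert verify_content(b"doctored footage") is None   # altered content fails
```

Note that this only proves a copy matches what was registered; it cannot, on its own, prove the registered original was truthful - which is why the study pairs hashing with integrity and provenance measures.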

Before an AI system/bot is trained, blockchains can tackle deepfakes in the same way they enable almost any tokenised asset to be traced to its past and current owners. However, one of the biggest challenges in AI development is the lack of transparency surrounding training data. AI models are often trained on massive datasets, but the specific content and origin of that data are frequently undisclosed - and this "black box" approach raises concerns about potential biases and unintended consequences embedded within the model. So, this is where blockchain technology becomes involved, offering a potential solution for creating a more transparent and accountable AI development process:

· pinpointing bias and corruption

The records held on a blockchain become a crucial investigative tool if an AI model exhibits biased outputs. By analysing the ledger, operators can pinpoint exactly when and which dataset might have introduced the bias - this allows for targeted adjustments and helps prevent similar issues in the future.

· unalterable training history

A blockchain provides an unchangeable record of the entire training process. Any modifications, updates or deviations made to the data or training process become readily apparent on the ledger. This immutability fosters trust and ensures the model's development can be accurately tracked and analysed.

· enhanced auditing

For critical applications demanding high levels of accountability and transparency, such as healthcare or finance, the detailed documentation provided by blockchain is invaluable. Auditors can use the blockchain record to thoroughly assess the training data and training process, ensuring the AI model operates within ethical and responsible guidelines.

Blockchains can help ensure responsible development and build trust in AI technology by providing a clear view of an AI model's training journey. This transparency helps mitigate potential biases and fosters public trust in the technology, paving the way for a more responsible and ethical future of AI development.
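The three points above can be sketched as a simple training-provenance log: each entry commits to the hash of the exact dataset used at a training step and chains to the previous entry, so any tampering breaks the chain and an auditor can trace which step introduced a suspect dataset. This is a hypothetical illustration (the function and dataset names are invented), not any specific vendor's system:

```python
import hashlib
import json

# Hypothetical training-provenance ledger: each entry commits to the exact
# dataset used at a training step, chained so history cannot be rewritten.
provenance_log = []

def _hash_dataset(dataset):
    return hashlib.sha256(json.dumps(sorted(dataset)).encode()).hexdigest()

def log_training_step(step: int, dataset) -> str:
    """Append an entry recording which dataset was used at this step."""
    prev = provenance_log[-1]["entry_hash"] if provenance_log else "genesis"
    entry = {"step": step, "dataset_hash": _hash_dataset(dataset), "prev": prev}
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    provenance_log.append(entry)
    return entry["entry_hash"]

def find_step_for_dataset(dataset):
    """Auditing: which step introduced a given (possibly biased) dataset?"""
    target = _hash_dataset(dataset)
    for entry in provenance_log:
        if entry["dataset_hash"] == target:
            return entry["step"]
    return None

log_training_step(1, ["clean_corpus_v1"])
log_training_step(2, ["clean_corpus_v1", "scraped_forum_dump"])

# If biased outputs appear, the audit pinpoints step 2 as the culprit:
assert find_step_for_dataset(["clean_corpus_v1", "scraped_forum_dump"]) == 2
```

Chaining each entry's hash to its predecessor is what makes the history "unalterable" in the sense described above: editing an old entry would change its hash and invalidate every later entry.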

 

Source: Alethea

Blockchain startup Alethea AI, having partnered with Oasis Labs, is creating a system for labelling and verifying synthetic media (AI-generated content) on its platform. This system aims to empower creators and individuals whose likenesses are used, by requiring legal permissions and consent for creating and monetising "AI" media. Akin to Twitter's blue checkmarks, Alethea believes blockchain validation will help distinguish legitimate content from misleading material. In addition, it emphasises users' control over their own data via the Oasis Parcel API, allowing them to determine how it is accessed and monetised.

 

Source: Attestiv

And Alethea is not alone; Attestiv uses AI and blockchain technology to create unforgeable digital fingerprints for digital files. Similar to a digital notary, it stores these fingerprints publicly, allowing anyone to verify a file's authenticity. This technology can be deployed to fight fraud (e.g. in insurance) and could be used in social media to warn users about altered content. Notably, Attestiv is currently being used to tackle $300 billion in insurance fraud by verifying the photos and videos submitted in claims.

However, although concerns around data security, bias and misuse hold back the full potential of AI, scalability has proven to be the biggest problem - and this is where enterprise blockchain technology offers a promising solution. Unlike public blockchains, which are open to everyone, enterprise blockchains provide a more controlled environment. This is crucial since some AI applications involve highly sensitive data, such as medical records or financial information; public blockchains, where everyone can see the data, might not be suitable for such scenarios, whereas enterprise blockchains offer restricted access, ensuring only authorised parties can view the data. Furthermore, public blockchains can become slow and expensive as the number of users and transactions increases, and for large-scale AI applications involving massive datasets, the processing power of a public blockchain might not be sufficient. Conversely, enterprise blockchains can be customised to handle an organisation's specific needs, thereby offering greater scalability.
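The "digital notary" idea can be illustrated with a short sketch - loosely inspired by the fingerprinting concept described above, but emphatically not Attestiv's actual API (all names here are invented). Fingerprinting a file in fixed-size chunks means even a one-byte edit anywhere changes the stored digest:

```python
import hashlib

CHUNK = 4096  # fingerprint the file in fixed-size chunks

def fingerprint(data: bytes) -> str:
    """Hash each chunk, then hash the concatenated chunk digests."""
    chunk_hashes = [hashlib.sha256(data[i:i + CHUNK]).digest()
                    for i in range(0, len(data), CHUNK)]
    return hashlib.sha256(b"".join(chunk_hashes)).hexdigest()

# Hypothetical public store (in practice, a blockchain): name -> fingerprint.
notary = {}

def notarise(name: str, data: bytes) -> None:
    notary[name] = fingerprint(data)

def is_authentic(name: str, data: bytes) -> bool:
    """Anyone holding the file can recompute and compare its fingerprint."""
    return notary.get(name) == fingerprint(data)

# e.g. an insurer notarises a claim photo at submission time
claim_photo = b"\x89PNG-claim-photo-bytes" * 500
notarise("claim_2024_001.png", claim_photo)

assert is_authentic("claim_2024_001.png", claim_photo)            # untouched
assert not is_authentic("claim_2024_001.png", claim_photo + b"!") # edited
```

Because only the fingerprint is published, the file itself never needs to leave the claimant's or insurer's systems - which also previews the privacy point made below about keeping sensitive data off public ledgers.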

The intersection of AI and blockchain technology offers promising solutions to the deepfake problem, but it also raises several critical questions. How effectively can blockchain's immutable ledger system combat the proliferation of deepfakes, and will it be sufficient to verify content authenticity on a large scale? And, whilst blockchain can enhance transparency in AI development by documenting training data, can it address the inherent biases within these datasets? Moreover, as blockchain creates a more transparent record of AI decisions, will this transparency be enough to alleviate public scepticism about AI, or will new forms of distrust emerge? And how will the balance between data privacy and the need for AI-driven insights be managed in a decentralised framework? These questions undoubtedly highlight the complexities and potential of integrating blockchain with AI and so the answers will determine whether this technological synergy is able to foster a more ethical, secure and trustworthy digital future.

 

This article first appeared in Digital Bytes (23rd of July, 2024), a weekly newsletter by Jonny Fry of Team Blockchain.