Artificial intelligence (AI) is revolutionizing several industries by increasing data processing and decision-making capabilities beyond human limits. However, as AI systems become more sophisticated, they become increasingly opaque, raising concerns about transparency, trust and fairness.
The “black box” nature typical of most AI systems often leads stakeholders to question the origins and reliability of AI-generated output. In response, technologies such as Explainable AI (XAI) have emerged to demystify AI operations, though they often fall short of fully clarifying model complexity.
As the complexity of AI continues to grow, so does the need for robust mechanisms to ensure that these systems are not only effective but also reliable and fair. Enter blockchain technology, known for improving security and transparency through decentralized administration.
Blockchain has potential not only for securing financial transactions, but also for imbuing AI operations with a layer of verifiability that was previously difficult to achieve. It has the potential to address some of AI’s most persistent challenges, such as data integrity and decision traceability, making it a critical component in the quest for transparent and trustworthy AI systems.
Chris Feng, COO of Chainbase, gave his insights on this topic in an interview with crypto.news. According to Feng, while blockchain integration won’t directly solve every facet of AI transparency, it will improve some crucial areas.
Can blockchain technology actually increase transparency in AI systems?
Blockchain technology does not solve the core problem of explainability of AI models. It is crucial to distinguish between interpretability and transparency. The main reason for the lack of explainability of AI models lies in the black-box nature of deep neural networks. While we understand the inference process, we do not understand the logical meaning of each parameter involved.
How does blockchain technology improve transparency in ways that differ from the improvements in interpretability offered by technologies like IBM’s Explainable AI (XAI)?
In the context of explainable AI (XAI), various methods, such as uncertainty statistics or analyzing model outputs and gradients, are used to understand its functionality. However, integrating blockchain technology does not change the internal reasoning and training methods of AI models and therefore does not improve their interpretability. Nevertheless, blockchain can improve the transparency of training data, procedures and causal inferences. For example, blockchain technology enables tracking of the data used for model training and integrates community input into decision-making processes. All of these data and procedures can be securely recorded on the blockchain, increasing the transparency of both the construction and inference processes of AI models.
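Feng's point about securely recording training data and procedures can be sketched with a toy append-only, hash-chained ledger. The `ProvenanceLedger` class and its field names below are illustrative stand-ins for an on-chain record, not any specific chain's API:

```python
import hashlib
import json
import time


def fingerprint(payload: dict) -> str:
    """Deterministic SHA-256 digest of a JSON-serializable record."""
    blob = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()


class ProvenanceLedger:
    """Toy append-only, hash-chained log standing in for an on-chain record."""

    def __init__(self):
        self.entries = []

    def record(self, payload: dict) -> dict:
        # Each entry commits to the previous entry's hash, so any later
        # tampering with training metadata breaks the chain.
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "payload": payload,
            "prev_hash": prev,
            "timestamp": time.time(),
        }
        entry["entry_hash"] = fingerprint({"payload": payload, "prev": prev})
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; False if any recorded step was altered."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev_hash"] != prev:
                return False
            if e["entry_hash"] != fingerprint({"payload": e["payload"], "prev": prev}):
                return False
            prev = e["entry_hash"]
        return True


ledger = ProvenanceLedger()
ledger.record({"dataset": "corpus-v1", "sha256": "ab12...", "rows": 100_000})
ledger.record({"step": "train", "model": "demo-net", "epochs": 3})
print(ledger.verify())  # True
```

An auditor can rerun `verify()` at any time; changing any recorded dataset digest or training step after the fact invalidates every later entry, which is the transparency property Feng describes.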
Given the widespread problem of bias in AI algorithms, how effective is blockchain in ensuring data provenance and integrity throughout the AI lifecycle?
Current blockchain methodologies have shown significant potential in securely storing and providing training data for AI models. Using distributed nodes improves confidentiality and security. For example, Bittensor uses a distributed training approach that distributes data across multiple nodes and implements algorithms to prevent cheating between nodes, increasing the resilience of distributed AI model training. Furthermore, protecting user data during inference is of utmost importance. For example, Ritual encrypts data before distributing it to off-chain nodes for inference computations.
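The anti-cheating idea Feng attributes to distributed training — replicate the same work across several nodes and cross-check their answers — can be sketched as below. The function names and the simple majority-vote rule are illustrative assumptions, not Bittensor's actual protocol:

```python
from collections import Counter


def assign_redundantly(shards, node_ids, replication=3):
    """Assign each data shard to several nodes so results can be cross-checked."""
    assignments = {}
    for i, _shard in enumerate(shards):
        assignments[i] = [node_ids[(i + k) % len(node_ids)] for k in range(replication)]
    return assignments


def cross_check(results):
    """results: {shard_id: {node_id: answer}} -> (accepted answers, suspect nodes).

    Accept the majority answer per shard; flag any node that disagrees.
    """
    accepted, suspects = {}, set()
    for shard_id, answers in results.items():
        majority, _count = Counter(answers.values()).most_common(1)[0]
        accepted[shard_id] = majority
        suspects |= {node for node, ans in answers.items() if ans != majority}
    return accepted, suspects


# Hypothetical run: nodes compute the sum of each shard; node "C" cheats.
shards = [[1, 2], [3, 4]]
print(assign_redundantly(shards, ["A", "B", "C"]))
reported = {0: {"A": 3, "B": 3, "C": 999}, 1: {"A": 7, "B": 7, "C": 7}}
print(cross_check(reported))  # ({0: 3, 1: 7}, {'C'})
```

Redundancy trades extra compute for the ability to detect a dishonest minority, which is why replication factors and slashing rules matter in such designs.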
Are there limitations to this approach?
A notable limitation is controlling for model bias that arises from the training data. In particular, biases in model predictions related to gender or race, inherited from the training data, are often neglected. Currently, neither blockchain technologies nor existing debiasing methods effectively eliminate such bias.
Do you think blockchain can improve the transparency of the validation and testing phases of AI models?
Companies like Bittensor, Ritual and Santiment use blockchain technology to connect on-chain smart contracts with off-chain computing capabilities. This integration enables verifiable inference, providing transparency across data, models and computing power throughout the process.
What consensus mechanisms do you think are best suited for blockchain networks to validate AI decisions?
Personally, I advocate integrating Proof of Stake (PoS) and Proof of Authority (PoA) mechanisms. Unlike conventional distributed computing, AI training and inference processes require consistent and stable GPU resources over extended periods of time. Therefore, it is imperative to validate the effectiveness and reliability of these nodes. Currently, reliable computing resources are mainly housed in data centers of varying scales, as consumer-grade GPUs may not sufficiently support AI services on the blockchain.
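The core of the PoS mechanism Feng favors — choosing a validator with probability proportional to stake, deterministically from a shared seed such as a block hash — can be sketched in a few lines. The `select_validator` helper is a toy illustration, not a production consensus implementation:

```python
import hashlib
import random


def select_validator(stakes: dict, seed: str) -> str:
    """Pick a validator with probability proportional to its stake.

    Seeding the RNG from a shared value (e.g. a block hash) makes the
    choice deterministic, so every honest node agrees on the result.
    """
    rng = random.Random(hashlib.sha256(seed.encode()).digest())
    validators = sorted(stakes)  # fixed order for determinism
    total = sum(stakes[v] for v in validators)
    target = rng.uniform(0, total)
    running = 0.0
    for v in validators:
        running += stakes[v]
        if target <= running:
            return v
    return validators[-1]


stakes = {"datacenter-1": 90, "hobbyist-gpu": 10}
print(select_validator(stakes, "block-0001"))
```

Over many rounds, the heavily staked node wins selection roughly nine times out of ten here, which mirrors Feng's point that stable, well-resourced data-center nodes end up anchoring blockchain-based AI services.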
Going forward, what creative approaches or developments in blockchain technology do you expect to be critical in overcoming current AI transparency challenges, and how can they change the landscape of AI trust and accountability reform?
I see several challenges in today’s blockchain-based AI applications, such as disentangling the relationship between model bias and training data, and leveraging blockchain technology to detect and mitigate black-box attacks. I am actively exploring ways to encourage the community to conduct experiments on model interpretability and increase the transparency of AI models. I am also considering how blockchain can help transform AI into a genuine public good. Public goods are defined by transparency, social benefit and serving the public interest. However, current AI technologies often fall between experimental projects and commercial products. By leveraging a blockchain network that incentivizes and distributes value, we can catalyze the democratization, accessibility, and decentralization of AI. This approach could achieve actionable transparency and promote greater reliability of AI systems.