Bitcoin Platform – Bitcoin | Altcoins | Blockchain | News Stories Updated Daily

AI systems “have to be biased”, just not in the way we think

2023-07-25

When I recently asked ChatGPT for a joke about Sicilians, it implied that Sicilians stink.

As someone born and raised in Sicily, I reacted with disgust to ChatGPT’s joke. But at the same time, my computer scientist’s brain began to turn over a seemingly simple question: Should ChatGPT and other artificial intelligence systems be biased?

Credit: Emilio Ferrara, CC BY-ND

You might say, “Of course not!” And that would be a reasonable response. But there are some researchers, like me, who argue the opposite: AI systems like ChatGPT should indeed be biased, but not in the way you might think.

Removing biases from AI is a laudable goal, but blindly eliminating biases can have unintended consequences. Instead, bias in AI can be controlled to achieve a higher goal: fairness.

Exposing bias in AI

As AI becomes increasingly integrated into everyday technology, many people agree that tackling bias in AI is an important issue. But what does “AI bias” actually mean?

Computer scientists say an AI model is biased if it produces unexpectedly skewed results. These results may discriminate against individuals or groups, or otherwise conflict with positive human values such as fairness and truth. Even small deviations from expected behavior can have a “butterfly effect,” where seemingly minor biases are amplified by generative AI and have far-reaching consequences.

Bias in generative AI systems can come from a variety of sources. Problematic training data may associate certain occupations with specific genders or perpetuate racial biases. Learning algorithms themselves can be biased and then reinforce existing biases in the data.

🚨Generative AI has a serious problem with bias🚨

After months of reporting, @dinabass and I looked at thousands of images from Stable Diffusion and found that text-to-image AI takes gender and racial stereotypes to greater extremes than in the real world.

— Leonardo (@Leonardonclt) June 9, 2023

But systems can also be biased by design. For example, a company may design its generative AI system to prioritize formal writing over creative writing, or to serve government-owned companies specifically, inadvertently reinforcing existing biases and excluding differing views. Other societal factors, such as a lack of regulation or misaligned financial incentives, can also lead to AI biases.


The challenges of removing bias

It is not clear whether biases can or should be completely removed from AI systems.

Imagine you’re an AI engineer and you notice that your model produces a stereotypical response, such as “Sicilians stink.” You might think the solution is to remove some bad examples from the training data, perhaps jokes about the smell of Sicilian food. Recent research has shown how to perform this kind of “AI neurosurgery” to de-emphasize associations between certain concepts.
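To make the de-emphasizing idea concrete, here is a toy sketch, not the method from any specific paper: it weakens one learned association by projecting it out of an embedding vector. The vocabulary and 4-dimensional vectors are invented for illustration; real models use high-dimensional embeddings learned from data.

```python
import numpy as np

# Invented embeddings for illustration only.
emb = {
    "sicilian": np.array([0.9, 0.1, 0.4, 0.2]),
    "stink":    np.array([0.8, 0.0, 0.1, 0.1]),
    "island":   np.array([0.1, 0.9, 0.5, 0.3]),
}

def cosine(a, b):
    # Cosine similarity: how strongly two concepts are associated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def de_emphasize(vec, direction):
    """Remove the component of `vec` that lies along `direction`."""
    d = direction / np.linalg.norm(direction)
    return vec - (vec @ d) * d

before = cosine(emb["sicilian"], emb["stink"])
fixed = de_emphasize(emb["sicilian"], emb["stink"])
after = cosine(fixed, emb["stink"])
print(before, after)  # the unwanted association drops to ~0

# Side effect: the edited vector's relationship to unrelated concepts
# shifts too, which is exactly the unpredictability discussed below.
drift = cosine(fixed, emb["island"]) - cosine(emb["sicilian"], emb["island"])
print(drift)
```

The point of the `drift` line is the caveat: surgically removing one association silently moves the vector relative to everything else.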

But these well-intentioned changes can have unpredictable and potentially negative effects. Even small variations in the training data or in an AI model configuration can lead to significantly different system outcomes, and these changes are impossible to predict in advance. You don’t know what other associations your AI system has learned as a result of “unlearning” the bias you just dealt with.

Other efforts to reduce bias run similar risks. An AI system trained to completely avoid certain sensitive topics may produce incomplete or misleading answers. Misguided regulations can exacerbate AI bias and security issues rather than improve them. Bad actors can bypass protections to provoke malicious AI behavior, making phishing scams more convincing or using deepfakes to rig elections.

With these challenges in mind, researchers are working to improve data sampling techniques and algorithmic fairness, especially in environments where certain sensitive data is not available. Some companies, such as OpenAI, have opted to have human collaborators annotate the data.

On the one hand, these strategies can help the model to better reflect human values. However, implementing any of these approaches also puts developers at risk of introducing new cultural, ideological, or political biases.


Controlling bias

There is a trade-off between reducing bias and ensuring that the AI system remains useful and accurate. Some researchers, myself included, believe generative AI systems should be biased, but in a carefully controlled way.

For example, my collaborators and I have developed techniques that allow users to specify the level of bias an AI system should tolerate. This model can detect toxicity in written text while taking in-group or cultural linguistic norms into account. While traditional approaches may inaccurately label some posts written in African American English as offensive, and posts by LGBTQ+ communities as toxic, this “controllable” AI model provides a much fairer classification.
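As an illustration of what “controllable” could mean in practice, here is a hypothetical sketch, not the authors’ actual model: a filter that exposes its decision threshold, so deployers can raise the toxicity cutoff for in-group language that generic classifiers tend to over-flag. The scores and community labels are invented; in a real system the score would come from a trained classifier.

```python
# Hypothetical sketch of a user-controllable toxicity filter.
class ControllableToxicityFilter:
    def __init__(self, default_threshold=0.5, community_thresholds=None):
        self.default_threshold = default_threshold
        # Per-community cutoffs, e.g. a higher bar for in-group speech.
        self.community_thresholds = community_thresholds or {}

    def is_toxic(self, score, community=None):
        # Pick the cutoff for this community, falling back to the default.
        cutoff = self.community_thresholds.get(community, self.default_threshold)
        return score >= cutoff

filt = ControllableToxicityFilter(
    default_threshold=0.5,
    community_thresholds={"in-group": 0.8},  # tolerate reclaimed in-group terms
)
print(filt.is_toxic(0.6))              # flagged under the default cutoff
print(filt.is_toxic(0.6, "in-group"))  # not flagged: in-group norms raise the bar
```

The same text, with the same model score, is classified differently depending on the linguistic norms the deployer chooses to respect; the bias is not removed but placed under explicit control.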

Controllable – and safe – generative AI is important to ensure that AI models produce output that aligns with human values, while still allowing for nuance and flexibility.

Toward fairness

Even if researchers could achieve bias-free generative AI, it would only be a step toward the broader goal of fairness. The pursuit of fairness in generative AI requires a holistic approach – not just better data processing, annotation and debiasing algorithms, but also human collaboration between developers, users and affected communities.

As AI technology continues to expand, it’s important to remember that removing bias isn’t a one-time solution. Rather, it is an ongoing process that requires constant monitoring, refinement and adjustment. While developers may not be able to easily anticipate or control the butterfly effect, they can remain vigilant and thoughtful in their approach to AI bias.


This article is republished from The Conversation under a Creative Commons license. Read the original article by Emilio Ferrara, Professor of Computer Science and Communication, University of Southern California.


