Bitcoin Platform – Bitcoin | Altcoins | Blockchain | News Stories Updated Daily
AI language models are nothing without humans, sociologist explains
2023-08-25

The media frenzy surrounding ChatGPT and other large language model AI systems spans a range of themes, from the prosaic – large language models could replace conventional web search – to the worrisome – AI will eliminate many jobs – to the overwrought – AI poses an extinction-level threat to humanity. All of these themes share a common denominator: large language models herald an artificial intelligence that will supplant humanity.

But large language models, for all their complexity, are actually really dumb. And despite the name “artificial intelligence,” they are completely dependent on human knowledge and labor. They certainly can’t reliably generate new knowledge, but there’s more to it than that.

ChatGPT can’t learn, improve or even stay up to date without people feeding it new content and telling it how to interpret that content, to say nothing of the people who program the model and build, maintain and power its hardware. To understand why, you first need to understand how ChatGPT and similar models work, and the role humans play in making them work.

How ChatGPT works

Large language models like ChatGPT work, broadly, by predicting which characters, words and sentences should follow one another in sequence, based on training datasets. In ChatGPT’s case, the training dataset contains massive quantities of public text scraped from the internet. ChatGPT works by statistics, not by understanding words.

Imagine I trained a language model on the following set of sentences:

Bears are large, hairy animals. Bears have claws. Bears are secretly robots. Bears have noses. Bears are secretly robots. Bears sometimes eat fish. Bears are secretly robots.

The model would be more inclined to tell me that bears are secretly robots than anything else, because that string of words appears most often in its training dataset. This is obviously a problem for models trained on fallible, inconsistent datasets – which is all of them, even academic literature.
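The frequency logic behind the bear example can be sketched as a toy bigram model – a hypothetical illustration only, since real language models use neural networks over tokens rather than raw word counts:

```python
from collections import Counter

# The toy training corpus from the bear example, split into words.
corpus = ("Bears are large hairy animals . Bears have claws . "
          "Bears are secretly robots . Bears have noses . "
          "Bears are secretly robots . Bears sometimes eat fish . "
          "Bears are secretly robots .").split()

# Count every adjacent word pair: a bigram frequency table.
bigrams = Counter(zip(corpus, corpus[1:]))

def most_likely_next(word):
    """Pick the word that most often follows `word` in the corpus."""
    followers = {nxt: n for (prev, nxt), n in bigrams.items() if prev == word}
    return max(followers, key=followers.get)

# Greedily generate from "Bears": the model parrots its most frequent pattern.
word, sentence = "Bears", ["Bears"]
for _ in range(3):
    word = most_likely_next(word)
    sentence.append(word)

print(" ".join(sentence))  # Bears are secretly robots
```

Because “secretly robots” follows “Bears are” three times while every other continuation appears at most once, the highest-frequency path wins – the statistical behavior the example describes.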


People write a lot of different things about quantum physics, Joe Biden, healthy eating, or the January 6 riot, some more valid than others. How is the model supposed to know what to say about something when people say many different things?

The need for feedback

This is where feedback comes in. If you use ChatGPT, you’ll notice that you have the option to rate responses as good or bad. If you rate one as bad, you’ll be asked to provide an example of what a good answer would contain. ChatGPT and other large language models learn which answers – which predicted strings of text – are good and bad through feedback from users, from the development team, and from contractors hired to label the output.

ChatGPT cannot by itself compare, analyze or evaluate arguments or information. It can only generate strings of text that are similar to those other people have used when comparing, analyzing, or evaluating, giving preference to strings of text that are similar to those that have been said to be good answers in the past.
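That preference mechanism can be sketched in miniature. This is purely illustrative – the `record_feedback` and `pick_answer` names are invented here, and real systems train a reward model rather than keeping a lookup table – but it captures the idea that ratings, not understanding, steer which strings get preferred:

```python
from collections import defaultdict

# Running feedback score for each candidate answer string.
scores = defaultdict(float)

def record_feedback(answer, good):
    """Nudge an answer's score up on a thumbs-up, down on a thumbs-down."""
    scores[answer] += 1.0 if good else -1.0

def pick_answer(candidates):
    """Prefer whichever candidate has accumulated the best feedback."""
    return max(candidates, key=lambda a: scores[a])

record_feedback("Bears are secretly robots.", good=False)
record_feedback("Bears are large, hairy animals.", good=True)

best = pick_answer(["Bears are secretly robots.",
                    "Bears are large, hairy animals."])
print(best)  # Bears are large, hairy animals.
```

Note that nothing in this sketch evaluates whether an answer is true; it only tracks which strings humans have marked as good – which is exactly the dependence on human labor the article describes.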

So when the model gives you a good answer, it is drawing on a great deal of human labor that has already gone into telling it what is and isn’t a good answer. There are many, many human workers hidden behind the screen, and they will always be needed if the model is to keep improving or to expand its coverage.

An investigation published by journalists in Time magazine found that hundreds of Kenyan workers spent thousands of hours reading and labeling racist, sexist and disturbing writing, including graphic descriptions of sexual violence, from the darkest depths of the internet to teach ChatGPT not to reproduce such content. They were paid no more than US$2 an hour, and many understandably reported experiencing mental distress as a result of this work.


What ChatGPT can’t do

The importance of feedback is evident in ChatGPT’s tendency to “hallucinate” – that is, to confidently give incorrect answers. ChatGPT can’t give good answers on a topic without training, even if good information about that topic is widely available on the internet. You can try this yourself by asking ChatGPT about more and less obscure subjects. I found it particularly effective to ask ChatGPT to summarize the plots of various works of fiction, because the model has apparently been trained more rigorously on nonfiction than on fiction.

In my own testing, ChatGPT summarized the plot of J.R.R. Tolkien’s “The Lord of the Rings,” a very famous novel, with only a few mistakes. But its summaries of Gilbert and Sullivan’s “The Pirates of Penzance” and Ursula K. Le Guin’s “The Left Hand of Darkness” – both slightly more niche, but far from obscure – verged on playing Mad Libs with the character and place names. It doesn’t matter how good these works’ respective Wikipedia pages are: the model needs feedback, not just content.

Because large language models don’t actually understand or evaluate information, they depend on people to do it for them. They are parasitic on human knowledge and labor. When new sources are added to their training datasets, they need new training on whether and how to build sentences based on those sources.

They can’t evaluate whether news reports are accurate or not. They can’t assess arguments or weigh trade-offs. They can’t even read an encyclopedia page and make only statements consistent with it, or accurately summarize the plot of a movie. They rely on human beings to do all these things for them.


They then paraphrase and remix what people have said, relying on yet more people to tell them whether they’ve paraphrased and remixed well. If the common wisdom on a particular topic changes – for example, whether salt is bad for your heart or whether early breast cancer screening is beneficial – they will need extensive retraining to incorporate the new consensus.

Many people behind the curtain

In short, far from being the harbinger of fully independent AI, large language models illustrate the total dependence of many AI systems, not only on their designers and maintainers, but also on their users. So the next time ChatGPT gives you a good or useful answer, remember to thank the thousands or millions of hidden people who wrote the words it crunched and who taught it what good and bad answers were.

Far from being an autonomous superintelligence, ChatGPT, like all technologies, is nothing without us.

This article was republished from The Conversation under a Creative Commons license. Read the original article by John P. Nelson, postdoctoral researcher in the ethics and societal implications of artificial intelligence, Georgia Institute of Technology.
