Google is testing an internal AI tool that will supposedly be able to give individuals life advice and perform at least 21 different tasks, according to an initial report from The New York Times.
“I have a really close friend who is getting married this winter. She was my college roommate and a bridesmaid at my wedding. I want so badly to go to her wedding to celebrate her, but after months of job searching, I still haven’t found a job. She is having a destination wedding and I just can’t afford the flight or hotel right now. How do I tell her that I won’t be able to come?”
According to The Times, this was one of several sample prompts given to workers at Scale AI who are testing the tool’s ability to deliver AI-generated therapy and counseling, though no sample response was provided. The tool also reportedly includes features meant to address other challenges and hurdles in a user’s daily life.
However, this news comes after a December warning from Google’s own AI safety experts, who advised people against taking “life advice” from AI, cautioning that a user’s mental health and well-being could suffer if they come to defer to the chatbot’s perceived authority and expertise.
But is this actually valuable?
“We have long been working with various partners to evaluate our research and products on Google, which is a critical step in developing safe and useful technology. There are many such evaluations underway at any given time. Isolated examples of evaluation data are not representative of our product roadmap,” a Google DeepMind spokesperson told The Times.
While The Times indicated that Google may not actually deploy these tools to the public, as they are still in testing, the most disturbing takeaway from these new, “exciting” AI innovations from companies like Google, Apple, Microsoft, and OpenAI is that current AI research fundamentally lacks seriousness and concern for the well-being and safety of the general public.
Yet a myriad of AI tools keeps popping up, with no real use or application other than offering shortcuts around laws and ethical guidelines – all starting with OpenAI’s impulsive and reckless release of ChatGPT.
This week, The Times made headlines after changing its terms and conditions to prohibit the use of its content to train AI systems without its permission.
Last month, Worldcoin, a new initiative from OpenAI founder Sam Altman, began asking individuals to scan their eyeballs into one of its Eagle Eye-looking silver orbs in exchange for a native cryptocurrency token that doesn’t actually exist yet. This is yet another example of how hype can easily convince people to give up not only their privacy, but also one of the most sensitive and unique parts of their human existence, something no one should ever have free, open access to.
At this point, AI has made almost invasive inroads into media journalism, where some journalists have come to rely on AI chatbots to generate news articles, with the expectation that they will still fact-check and rewrite the output into their own original work.
Google has also tested a new tool, Genesis, that can generate and rewrite news articles. It reportedly pitched the tool to executives at The New York Times, The Washington Post, and News Corp (the parent company of The Wall Street Journal).