OpenAI has unveiled its most autonomous AI tool yet: a version of ChatGPT that can browse the internet, operate apps, and complete real-world tasks with little to no human input. But with the jump in capability comes a grim warning: the technology could also invite a new wave of security threats.
ChatGPT Agent, launched on Thursday, lets users delegate complex tasks such as planning holidays, booking hotel rooms, researching competitors, generating slide decks, and placing online orders.
The feature is rolling out today to Pro, Plus, and Team users.
To complete tasks, the agent uses a virtual computer and a unified set of tools, including a text-based browser, a terminal, and access to third-party apps such as Google Drive and GitHub. The virtual computer is a simulated environment running in the cloud that ChatGPT Agent can operate independently, in effect giving the AI its own private, sandboxed machine on which to do real work.
“I think this is a new level of capability in AI,” said Sam Altman, CEO of OpenAI, during a livestream demonstration led by members of the team who built the product. The livestream was also notable, in part, for the number of cautionary warnings OpenAI issued.
“It is a new way to use AI, but it will come with a new set of attacks,” Altman said. “Society and technology will have to evolve and learn how to mitigate things we cannot yet imagine, as people start handing over more and more work.”
One example: an agent researching a purchase could land on a phishing site and hand over a user’s credit card information. To mitigate that risk, the current release includes a number of safeguards that, for instance, block the submission of credit card information until the user approves it manually.
“We trained the model to ignore suspicious instructions on risky websites,” said OpenAI researcher Casey Chu. “We also have monitors that watch the agent’s behavior and stop it if something seems suspicious.”
Chu added that although the system’s safeguards can be updated in real time, ChatGPT Agent is still an “advanced product” that opens the door to new forms of exploitation.
“It is important for users to understand the risks and be mindful of the information they share,” he said.
The release of ChatGPT Agent comes as AI developers race to equip virtual assistants with increasingly powerful capabilities. On Wednesday, Google launched a new AI-powered feature in Google Search that enables its Gemini AI to call businesses on behalf of users.
“ChatGPT Agent is still in its early stages, and we’re using this time to learn from real-world use to improve both the product and our safeguards,” an OpenAI representative told Decrypt. “The current system card reflects our current approach, but we’re preparing for what comes next and will continue to share updates as we make the agent better and safer.”
ChatGPT can now do work for you using its own computer.

Introducing ChatGPT agent, a unified agentic system that combines remote-browser actions, deep research synthesis, and the conversational strengths of ChatGPT. pic.twitter.com/7Un2nc6nbq

– OpenAI (@OpenAI) July 17, 2025
Cyber security experts have also expressed concern about the implications of autonomous agents.
“High concern is justified because the agent has implicit authority to reveal personally identifiable data during the dialogue,” said Nic Adams, co-founder and CEO of cybersecurity firm 0rcus. “Users must grant granular, revocable scopes, such as the target business, the purpose, the permissible data elements, and an expiry timestamp.”
As a best practice, Adams suggested that the agent present a full transcript for approval after execution, and retain the information no longer than legally required.
“Silent, blanket permission would shift liability to the user without meaningful control,” he said. “That is why a per-task confirmation model is necessary.”
Beyond the risks of having AI agents make purchases or plans, OpenAI researchers acknowledged that this level of autonomy introduces new threats, in particular prompt injection attacks, in which malicious input tricks the AI into leaking data, spreading misinformation, or taking unauthorized actions.
To mitigate these risks, OpenAI developed a takeover mode, which, as the name suggests, lets users take control from the agent and enter information themselves rather than entrusting it to the agent. In some cases, ChatGPT Agent will ask users for explicit approval before taking consequential actions, such as making purchases or accessing sensitive data.
“We’ve built a powerful tool, but users must remain careful,” Chu said.
