Apple this week released a new feature as part of the ongoing public beta of iOS 17: the ability to clone your voice and use it in the iPhone’s native and third-party communication applications.
Personal Voice for iPhone uses artificial intelligence (AI) to create a near-exact replica of your voice, which is then stored on the phone and works in tandem with what Apple calls “on-device machine learning” to help ensure user privacy.
Apple first teased these software features in May, saying they would specifically target cognitive, visual, hearing, and mobility accessibility and were expected to arrive later this year.
The following month, the Cupertino-based tech company formally announced iOS 17 at WWDC23, its annual developer conference, where it discussed new features in more detail: Contact Posters, Live Voicemail, FaceTime audio and video messages, Personal Voice, and more.
What is “Personal Voice”?
Last month, Apple released the second iOS 17 public beta, adding Personal Voice to the growing lineup of previously announced features, including, but not limited to, Contact Posters, Live Voicemail, and StandBy.
For users at risk of losing their ability to speak, including those diagnosed with ALS or other conditions that progressively affect speech, Personal Voice serves as a bridge: a speech accessibility feature that uses on-device machine learning and prompts users to read a randomized series of text phrases so the device can capture the individual’s voice.
“Accessibility is part of everything we do at Apple,” said Sarah Herrlinger, Apple’s senior director of Global Accessibility Policy and Initiatives, in a press release. “These groundbreaking features are designed with feedback from disability community members every step of the way, to support a diverse set of users and help people connect in new ways.”
On Tuesday, CNET’s Nelson Aguilar shared his experience testing the new Personal Voice feature, which lives under Settings → Accessibility → Live Speech → Voices → Personal Voice.
“You’ll have to read aloud 150 phrases, which differ in length,” he said, noting that if you make a mistake while recording, you can simply hit the record button to re-record the phrase. Aguilar added that, depending on how fast a person speaks, the process could take 20 to 30 minutes to complete.
“At the end of the day, the most important thing is being able to communicate with friends and family,” said Philip Green, a board member and ALS advocate at the nonprofit Team Gleason, who has experienced significant changes in his voice since his ALS diagnosis in 2018. “Being able to tell them you love them, in a voice that sounds like you, makes all the difference in the world – and being able to create your synthesized voice on your iPhone in just 15 minutes is extraordinary.”
The general release of iOS 17 is expected to arrive sometime in September.