Apple announced that iPhone and iPad users will soon have the ability to hear their devices speak in their own voice

Apple announced that iPhone and iPad users will soon be able to hear their devices speak in their own voice. The upcoming feature, called "Personal Voice," guides users through a series of randomized text prompts to record 15 minutes of audio, which the device then uses to generate a synthetic version of their voice. A companion tool called "Live Speech" will let users type phrases, and save commonly used ones, for the device to speak aloud during phone calls, FaceTime calls, or in-person conversations. Apple says the voice is created with on-device machine learning, a form of artificial intelligence, to keep users' data secure and private.

While the feature may initially seem quirky, it is part of Apple's broader commitment to accessibility. The company highlighted the importance of technology that can assist people with conditions like ALS, which carries a high risk of losing the ability to speak. Tim Cook, Apple's CEO, stated, "At Apple, we've always believed that the best technology is technology built for everyone." Philip Green, a board member at the Team Gleason nonprofit whose voice has been significantly affected by ALS, spoke to the significance of being able to communicate with loved ones in a voice that sounds like his own.

Siri familiarized iPhone users with Apple's presence in the AI voice market, but this new feature represents a step forward in personalized speech technology. Apple expects Personal Voice to be available before the end of the year, though it has not specified an exact release date.
