Apple introduces new accessibility features for iOS and iPadOS

Apple has announced a number of new accessibility features for iOS and iPadOS. With this update, the Cupertino giant is doubling down on its commitment to “making technology accessible to everyone.” The new accessibility enhancements, including Assistive Access for cognitive accessibility, Live Speech, and more, take advantage of the devices’ built-in hardware and on-device machine learning. Read on to learn more.

New accessibility features for iPhone and iPad

One of the newly announced features is Assistive Access, designed for users with cognitive disabilities. People with conditions such as autism or Alzheimer’s often find it difficult to navigate the many items on a smartphone screen, so Assistive Access pares apps and OS elements down to their core functionality.

Assistive Access works with Phone, FaceTime, Camera, and other everyday apps. When it is enabled, users see enlarged text, high-contrast buttons, and only the features that matter. Apple says the design is based on data and feedback collected from people with cognitive disabilities and their trusted supporters.

Image: Assistive Access

Another new feature is Live Speech and Personal Voice, which aims to make technology accessible to people with speech disabilities, such as those caused by ALS. With Live Speech, users can type what they want to say and have their iPhone or iPad speak it out loud, which is useful in both face-to-face and virtual conversations.
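
Apple hasn’t shared developer documentation for Live Speech itself, but the underlying type-to-speech flow can be approximated with AVSpeechSynthesizer, Apple’s long-standing public text-to-speech API. The `TypeToSpeak` class below is a hypothetical sketch of the technique, not Apple’s implementation:

```swift
import AVFoundation

// A minimal, hypothetical sketch of a type-to-speech flow in the spirit of
// Live Speech. AVSpeechSynthesizer is Apple's public text-to-speech API;
// Live Speech itself is a system feature, so this only shows the technique.
final class TypeToSpeak {
    private let synthesizer = AVSpeechSynthesizer()

    // Speak whatever the user has typed.
    func speak(_ typedText: String) {
        let utterance = AVSpeechUtterance(string: typedText)
        // Pick a system voice; Live Speech lets users choose their own.
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        utterance.rate = AVSpeechUtteranceDefaultSpeechRate
        synthesizer.speak(utterance)
    }
}

// Usage: answer someone in a face-to-face or FaceTime conversation.
let speaker = TypeToSpeak()
speaker.speak("I'll be there in ten minutes.")
```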

Personal Voice, on the other hand, is a notch above Live Speech. It lets users who are at risk of losing their ability to speak create a digital replica of their own voice by reading a set of randomized text prompts aloud. That means whenever you type out a reply to someone, it can be played back in your own voice rather than a generic robotic one. The feature uses on-device machine learning to create the voice securely and privately.

Image: Live Speech and Personal Voice
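
Apple has said third-party apps will be able to speak with a user’s Personal Voice, with permission. The sketch below uses the authorization and voice-trait names from the iOS 17 SDK previewed at WWDC23; treat the exact API as an assumption that may change before release:

```swift
import AVFoundation

// Hypothetical sketch: speaking with a user's Personal Voice from a
// third-party app. requestPersonalVoiceAuthorization and the .isPersonalVoice
// trait come from the iOS 17 SDK previewed at WWDC23 and may change.
func speakWithPersonalVoice(_ text: String, using synthesizer: AVSpeechSynthesizer) {
    AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
        guard status == .authorized else { return }
        // Personal Voices appear alongside system voices, flagged by a trait.
        let personalVoice = AVSpeechSynthesisVoice.speechVoices()
            .first { $0.voiceTraits.contains(.isPersonalVoice) }
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = personalVoice // nil falls back to the default voice
        synthesizer.speak(utterance)
    }
}
```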

For users who are blind or have low vision, Apple has introduced Point and Speak, part of the Detection Mode in Magnifier. The feature speaks aloud the text on physical objects a user points at. For example, as you move your finger across a keypad, such as on a microwave, the text on each button is read out. Apple achieves this by combining input from the Camera app, the LiDAR Scanner, and on-device machine learning, so Magnifier can announce any text the camera picks out.

Image: Point and Speak in Magnifier’s Detection Mode
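
Point and Speak is a built-in Magnifier capability rather than a developer API, but the core loop of recognizing text in a camera frame and speaking it can be approximated with Apple’s public Vision and AVFoundation frameworks. The LiDAR-based finger tracking is Apple’s own and isn’t reproduced here; this is a minimal sketch:

```swift
import Vision
import AVFoundation

// Sketch of the core technique behind Point and Speak: recognize the text in
// a camera frame with the Vision framework, then speak it. The real feature
// also uses LiDAR depth to work out which item the finger is pointing at;
// that fusion is Apple-internal and not reproduced here.
let synthesizer = AVSpeechSynthesizer()

func speakText(in pixelBuffer: CVPixelBuffer) {
    let request = VNRecognizeTextRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        // Keep the top candidate string from each detected text region.
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        guard !lines.isEmpty else { return }
        synthesizer.speak(AVSpeechUtterance(string: lines.joined(separator: ". ")))
    }
    request.recognitionLevel = .accurate

    // Run the request on a single camera frame (e.g. from an AVCaptureSession).
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}
```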

Other announcements include support for Made for iPhone hearing devices for users with hearing impairments, a Voice Control guide that helps users learn tips and tricks for voice commands, and more. Apple will begin rolling out these accessibility features to iPhone and iPad users by the end of this year. So what do you think of these new accessibility improvements? Do you think they will be useful to you? Share your thoughts in the comments below.