Apple has announced a number of new accessibility features for iOS and iPadOS. The Cupertino giant is stepping up its efforts to make “technology accessible to everyone” with this update. New accessibility enhancements such as Assistive Access and Live Speech leverage the device’s onboard hardware and machine learning capabilities. Continue reading below for more information.
New iPhone and iPad accessibility features
One recently announced feature is Assistive Access, designed for users with cognitive disabilities. People with conditions such as autism or Alzheimer’s disease often find it difficult to navigate the many elements on their smartphones, so this feature distills apps and OS elements down to their core functionality.
Assistive Access works with Phone, FaceTime, Camera, and other everyday apps. Turning it on limits the interface to essential functions, presented with enlarged text and high-contrast buttons. The feature was built on feedback collected from people with cognitive disabilities and their trusted supporters.
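For a rough sense of what this looks like in practice, here is a minimal SwiftUI sketch of one oversized, high-contrast control of the kind Assistive Access presents. It only illustrates the design direction, not Apple’s implementation, and the SimplifiedCallButton view is a made-up example:

```swift
import SwiftUI

// Rough illustration (not Apple's Assistive Access implementation) of a
// simplified control: one oversized button with large text, a clear icon,
// and high contrast, mapped to a single action.
struct SimplifiedCallButton: View {
    var body: some View {
        Button {
            // Hypothetical action: place a call to a saved contact.
        } label: {
            Label("Call", systemImage: "phone.fill")
                .font(.system(size: 40, weight: .bold))
                .frame(maxWidth: .infinity, minHeight: 120)
        }
        .buttonStyle(.borderedProminent)
        .tint(.green)
        .foregroundStyle(.white)
        .padding()
    }
}
```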

Another pair of new features is Live Speech and Personal Voice, which aim to make technology more accessible to people who are unable to speak or are at risk of losing their speech, such as those with ALS. Live Speech lets users type what they want to say and have it spoken aloud by their iPhone or iPad, which is useful for both in-person and virtual conversations.
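Live Speech itself is a system feature, but the basic idea of typing text and having the device speak it can be sketched with the long-standing AVSpeechSynthesizer API in AVFoundation. The snippet below is a minimal illustration, not Apple’s implementation, and the TypedSpeech class name is made up for the example:

```swift
import AVFoundation

// Minimal sketch: speak typed text aloud, similar in spirit to Live Speech.
// Uses the existing AVSpeechSynthesizer API, not the new system feature.
final class TypedSpeech {
    // Keep a strong reference so speech isn't cut off by deallocation.
    private let synthesizer = AVSpeechSynthesizer()

    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        utterance.rate = AVSpeechUtteranceDefaultSpeechRate
        synthesizer.speak(utterance)
    }
}

// Usage:
// let speech = TypedSpeech()
// speech.speak("I'll have a coffee, please.")
```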
Personal Voice, on the other hand, goes a step beyond Live Speech. It lets users in the early stages of conditions that affect speech create a digital copy of their own voice by reading a set of text prompts aloud. When they later type a response, it is played back in their own voice rather than a generic synthesized one. The voice is created with on-device machine learning, which keeps the recordings private and secure.

For users who are blind or have low vision, Apple introduced an accessibility feature called Point and Speak, part of detection mode in the Magnifier app. It reads aloud the text on physical objects that users point at. For example, when pointing at a keypad, the label on each button is spoken as your finger moves across it. Apple accomplishes this through a combination of the LiDAR Scanner, the Camera app, and on-device machine learning.
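Point and Speak is likewise a system feature, but the general recipe of recognizing text in a camera frame and reading it aloud can be approximated with Apple’s public Vision and AVFoundation frameworks. The sketch below is illustrative only; it omits the LiDAR-based pointing logic, and the FrameTextReader class name is made up:

```swift
import Vision
import AVFoundation

// Illustrative sketch: recognize text in a camera frame and speak it aloud.
// Not Apple's Point and Speak implementation; pointing/LiDAR logic is omitted.
final class FrameTextReader {
    private let synthesizer = AVSpeechSynthesizer()

    func readText(in image: CGImage) {
        let request = VNRecognizeTextRequest { request, error in
            guard error == nil,
                  let observations = request.results as? [VNRecognizedTextObservation] else { return }
            // Take the best candidate string from each detected text region.
            let lines = observations.compactMap { $0.topCandidates(1).first?.string }
            guard !lines.isEmpty else { return }
            self.synthesizer.speak(AVSpeechUtterance(string: lines.joined(separator: ", ")))
        }
        request.recognitionLevel = .accurate

        let handler = VNImageRequestHandler(cgImage: image, options: [:])
        try? handler.perform([request])
    }
}
```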

Other announcements include support for Made for iPhone hearing devices for users with hearing loss and a Voice Control guide to help users learn tips and tricks for the voice control features. Apple plans to begin rolling out these accessibility features to iPhone and iPad users by the end of this year. So what do you think of these new accessibility enhancements? Do you find them useful? Share your thoughts in the comments below.




