

An image is worth a thousand words. How is reading a text describing what is on the screen going to be better than just looking at the screen yourself, something you’ll need to do to read the description anyway? Aside from accessibility for the blind, the practicality of such a technology is questionable.
The motivation behind this is obviously to facilitate the collection and reporting of user profiling data. Accessibility for the blind is only a side effect. Tech companies have been doing it with automated audio transcriptions for years already; now they’re after what you look at on your screen.
Read the whole post. I already acknowledged them; I’m expressing my doubts about the true motivations driving Microsoft to force a technology like this on all their users, and my concerns over the real use they will make of it.
Don’t try to change the meaning of my post just so you have a cause to white knight over. This isn’t Reddit.