Researchers from the University of California, Berkeley, Lehigh University and the University of Toronto have developed an algorithm that demonstrates how word meanings evolve.
The question is: Why is this happening? How is it happening? And are there computational algorithms we can leverage to make predictions about the historical development of word meanings? […] The [algorithm’s] prediction is that a word should connect closely to related meanings in the space available – similar to finding nearest neighbours in semantic space – resulting in a chain that efficiently links novel meanings to the existing meanings of a word.
Source: U TORONTO NEWS
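The "chain of nearest neighbours" idea above can be sketched in a few lines of Python. This is a toy illustration under loud assumptions: the meanings, their 2-D vectors, and the greedy chaining rule are all invented for this example and are not the researchers' actual data or model.

```python
import math

# Toy 2-D "semantic space": each sense of the word "face" gets a
# hand-made vector. These vectors are illustrative assumptions only.
meanings = {
    "face (body part)":   (1.0, 0.0),
    "face (of a clock)":  (0.9, 0.3),
    "face (to confront)": (0.5, 0.8),
    "surface":            (0.8, 0.5),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nearest_neighbor_chain(start, candidates):
    """Greedily attach each new meaning to its closest existing meaning,
    producing a chain that links novel senses to established ones."""
    chain = [start]
    remaining = dict(candidates)
    while remaining:
        # Pick the candidate closest to ANY meaning already in the chain.
        best = max(
            remaining,
            key=lambda m: max(cosine(remaining[m], meanings[c]) for c in chain),
        )
        chain.append(best)
        del remaining[best]
    return chain

start = "face (body part)"
others = {m: v for m, v in meanings.items() if m != start}
chain = nearest_neighbor_chain(start, others)
print(chain)
```

With these toy vectors the chain grows outward from the concrete sense ("body part") through intermediate senses toward the most abstract one ("to confront"), which is the qualitative behaviour the researchers' prediction describes.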
Aiming to make life easier for people with dyslexia or other reading difficulties, a Japanese company has created a pair of smart glasses that convert written words into speech.
The user looks at some text and blinks to capture a photo of what’s in front of them; the photo is transmitted to a dedicated Raspberry Pi cloud system, analyzed for text, and converted into a voice that plays through the earpiece. If the system is unable to read the words, a remote worker is available to troubleshoot. The idea sounds similar to Google Translate, which can already take a photo and convert it into voice – but to use that app you still have to take out your phone and swipe over lines of text, which feels less natural than blinking at the text through your glasses.
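The capture-OCR-voice flow with a human fallback can be sketched as a simple dispatch pipeline. Everything here is a stand-in: the `ocr`, `text_to_speech`, and `escalate_to_remote_worker` functions are hypothetical placeholders for whatever the company's cloud system actually runs, and the "image" is just a dict for illustration.

```python
# A minimal sketch of the glasses' read-aloud pipeline, assuming stand-in
# components; this is not the company's actual API.

def ocr(image):
    """Stand-in OCR: returns recognized text, or None when it fails."""
    return image.get("text")  # pretend the image carries its own label

def text_to_speech(text):
    """Stand-in TTS: on the real device this would play through the earpiece."""
    return f"[speaking] {text}"

def escalate_to_remote_worker(image):
    """Fallback path: a human transcribes what the OCR could not."""
    return "[remote worker transcribes the image]"

def handle_blink(image):
    """Blink -> photo -> cloud OCR -> voice, with a human fallback."""
    text = ocr(image)
    if text is None:
        return escalate_to_remote_worker(image)
    return text_to_speech(text)

print(handle_blink({"text": "EXIT"}))  # OCR succeeds, text is spoken
print(handle_blink({"text": None}))   # OCR fails, routed to a human
```

The design point the article describes is exactly this branch: automated recognition on the happy path, with a human in the loop only when the OCR gives up.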