Science, asked by shikhaagarwal2p9xuso, 1 year ago

Draw an advertisement for a robot that can understand human languages.

Answers

Answered by asitankanp9zkx7
With the help of neural networks (vast networks of machines that mimic the web of neurons in the human brain), Facebook can recognize your face. Google can recognize the words you bark into an Android phone. And Microsoft can translate your speech into another language. Now, the task is to teach online services to understand natural language: to grasp not just the meaning of words, but entire sentences and even paragraphs.

At Facebook, artificial intelligence researchers recently demonstrated a system that can read a summary of The Lord of the Rings, then answer questions about the books. Using a neural networking algorithm called Word2Vec, Google is teaching its machines to better understand the relationships between words posted across the Internet, a way of boosting Google Now, the digital assistant that seeks to instantly serve up the information you need at any given moment. Yann LeCun, who oversees Facebook's AI work, calls natural language processing "the next frontier."
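
Word2Vec itself is openly documented, so what "understanding the relationship between words" means can be illustrated with a small sketch using the open-source gensim library rather than Google's internal systems; the toy corpus and parameter values below are made up for illustration only.

```python
# A minimal sketch of word2vec-style word relationships using gensim.
# The tiny corpus and parameters are illustrative assumptions, not Google's setup.
from gensim.models import Word2Vec

# Toy corpus: each "sentence" is a list of tokens.
sentences = [
    ["robot", "understands", "human", "language"],
    ["neural", "network", "learns", "word", "meaning"],
    ["machine", "translates", "speech", "into", "another", "language"],
    ["assistant", "answers", "questions", "about", "text"],
]

# Train a small model; real systems train on billions of words, not four sentences.
model = Word2Vec(sentences, vector_size=32, window=2, min_count=1, epochs=50)

# Words that appear in similar contexts end up with similar vectors,
# which is what lets the model capture relationships between words.
print(model.wv.most_similar("language", topn=3))
```

With a corpus this small the similarities are essentially noise; the point is only to show the interface by which word relationships are queried.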

Working toward this same end, the AI startup MetaMind has published new research detailing a neural networking system that uses a kind of artificial short-term memory to answer a wide range of questions about a piece of natural language. According to MetaMind, the system can answer everything from very specific queries about what the text describes to more general questions like "What's the sentiment of the text?" or "What's the French translation?" The research, due to appear Wednesday at Arxiv.org, a popular online repository for academic papers, echoes similar research from Facebook and Google, but it takes this work a step further.
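
MetaMind's architecture is not reproduced here, but the underlying task (reading a passage and answering a question about it) can be sketched with the open-source Hugging Face transformers library as a stand-in; the passage and question below are invented for illustration and this is not the MetaMind system.

```python
# A minimal sketch of reading-comprehension question answering.
# Uses the Hugging Face transformers library as a generic stand-in;
# it is NOT the MetaMind system described above.
from transformers import pipeline

# Downloads a default pretrained question-answering model on first use.
qa = pipeline("question-answering")

context = (
    "Frodo leaves the Shire carrying the One Ring. "
    "He travels to Mordor to destroy it in Mount Doom."
)

result = qa(question="Where does Frodo travel to?", context=context)
print(result["answer"], result["score"])
```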

"This is a very hot topic, on which the authors of this paper approach or pass the state-of-the-art results on several benchmarks," says Yoshua Bengio, a professor of computer science at the University of Montreal who specializes in artificial intelligence and has reviewed the MetaMind paper. "Their architecture is also interesting in that it is aiming at something potentially very ambitious, trying to sequentially parse a large amount of facts---hopefully one day the whole of Wikipedia and more---in such a way, via a learned semantic representation, that one can answer questions about them."
