Category Archives: AI

Of Mice and Memory Chips

Israeli scientists imprint multiple, persistent memories on a culture of neurons, paving the way to cyborg-type machines…. Researchers at Tel Aviv University in Israel have demonstrated that neurons cultured outside the brain can be imprinted with multiple rudimentary memories that persist for days without interfering with or wiping out others… The bottom line, the authors wrote: “these findings hint that chemical signaling mechanisms might play a crucial role in memory and learning in task-performing in vivo networks.”

Read this item from Scientific American

Filed under AI, Anthropotropism

Pick me, you autonomous decision-making robot

Scientists have expressed concern about the use of autonomous decision-making robots, particularly in military applications…. Autonomous robots are able to make decisions without human intervention. At a simple level, these can include robot vacuum cleaners that “decide” for themselves when to move from room to room or when to head back to a base station to recharge.

Read this item from the BBC and all things “Robot” from Whats New Media.
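
As a trivial illustration of the low-level autonomy the article mentions, a recharge decision can be written as a simple rule. This is only a sketch; the function name, thresholds, and battery figures below are invented for illustration, not taken from the BBC piece.

```python
# Illustrative sketch of a vacuum robot "deciding" on its own when to head
# back to its base station. All names and thresholds are assumptions.

def choose_action(battery_pct: float, rooms_left: int, return_cost_pct: float) -> str:
    """Pick the robot's next action without any human intervention."""
    # Keep enough charge in reserve to reach the dock, plus a safety margin.
    if battery_pct <= return_cost_pct + 10.0:
        return "return_to_base"
    if rooms_left == 0:
        return "return_to_base"
    return "clean_next_room"

print(choose_action(battery_pct=22.0, rooms_left=3, return_cost_pct=15.0))  # return_to_base
print(choose_action(battery_pct=80.0, rooms_left=3, return_cost_pct=15.0))  # clean_next_room
```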

Filed under AI, Anthropotropism

Jeff Hawkins’ On Intelligence

The question of intelligence is the last great terrestrial frontier of science. Most big scientific questions involve the very small, the very large, or events that occurred billions of years ago. But everyone has a brain. You are your brain. If you want to understand why you feel the way you do, how you perceive the world, why you make mistakes, how you are able to be creative, why music and art are inspiring, indeed what it is to be human, then you need to understand the brain. In addition, a successful theory of intelligence and brain function will have large societal benefits, and not just in helping us cure brain-related diseases. We will be able to build genuinely intelligent machines, although they won’t be anything like the robots of popular fiction and computer science fantasy. Rather, intelligent machines will arise from a new set of principles about the nature of intelligence. As such, they will help us accelerate our knowledge of the world, help us explore the universe, and make the world safer. And along the way, a large industry will be created.

Excerpt from Jeff Hawkins’ book On Intelligence from OnIntelligence.org. Commentary from the Read/Write Web.

If you have knowledge of Jeff Hawkins or his book On Intelligence, please consider contributing it to the Whats New Media Wiki.

Filed under AI, People, The Reading Room

Building computers that learn like babies

Hawkins has created an artificial intelligence program that he believes is the first software truly based on the principles of the human brain. Like your brain, the software is born knowing nothing. And like your brain, it learns from what it senses, builds a model of the world, and then makes predictions based on that model. The result, Hawkins says, is a thinking machine that will solve problems that humans find trivial but that have long confounded our computers — including, say, sight and robot locomotion. Hawkins believes that his program, combined with the ever-faster computational power of digital processors, will also be able to solve massively complex problems by treating them just as an infant’s brain treats the world: as a stream of new sensory data to interpret.

Read The Thinking Machine from WIRED. Previously from WNM: Robot evolution models show kinship helps communication and Robot Armies and Thaler’s Creativity Machine
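
The excerpt describes a loop: start knowing nothing, learn from a stream of sensory data, build a model of the world, and predict what comes next. The sketch below is only a toy illustration of that loop, not Hawkins’ actual software; the class name and the word stream are invented.

```python
# Toy learn/model/predict loop: observe a stream of tokens, count transitions,
# and predict the most likely next token from the learned model.
from collections import defaultdict, Counter

class StreamPredictor:
    def __init__(self):
        # "Born knowing nothing": the transition model starts empty.
        self.transitions = defaultdict(Counter)
        self.previous = None

    def observe(self, token):
        # Learn from what is sensed: count which token follows which.
        if self.previous is not None:
            self.transitions[self.previous][token] += 1
        self.previous = token

    def predict(self):
        # Predict from the learned model; None if nothing relevant seen yet.
        counts = self.transitions.get(self.previous)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

model = StreamPredictor()
for token in "the cat sat on the mat the cat sat".split():
    model.observe(token)
print(model.predict())  # -> 'on', because the model has seen "sat" followed by "on"
```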

Filed under AI, Anthropotropism, Technology, our Mirror

Robot evolution models show kinship helps communication

Robots that artificially evolve ways to communicate with one another have been demonstrated by Swiss researchers. The experiments suggest that simulated evolution could be a useful tool for those designing swarms of robots…. Cooperative communication evolved when selective success was judged at the group level – when many robots displayed efficient behaviour – or when the genomes of the robots were most similar – like biological relatives.

Read this article from New Scientist. Previously from WNM: Robot Armies and Thaler’s Creativity Machine and Defining Robot Rights
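
The selection scheme described above, judging success at the group level, can be sketched with a toy genetic algorithm. This is not the Swiss team’s robot simulation; the population size, mutation rate, and the single “signalling gene” are assumptions chosen only to show how group-level selection can favour cooperative communication.

```python
# Toy group-level selection: each genome is a probability of signalling to
# others; fitness is scored per group, so cooperation can spread.
import random

random.seed(0)
POP, GROUP, GENERATIONS = 100, 10, 30

def group_fitness(group):
    # Cooperative signalling benefits the whole group: score the group's
    # average willingness to signal.
    return sum(group) / len(group)

population = [random.random() for _ in range(POP)]  # initial signalling genes

for _ in range(GENERATIONS):
    groups = [population[i:i + GROUP] for i in range(0, POP, GROUP)]
    # Group-level selection: the fitter half of the groups supply all parents.
    scored = sorted(groups, key=group_fitness, reverse=True)
    parents = [gene for group in scored[:len(groups) // 2] for gene in group]
    # Offspring are mutated copies of members of the fitter groups.
    population = [min(1.0, max(0.0, random.choice(parents) + random.gauss(0, 0.05)))
                  for _ in range(POP)]

print(f"mean signalling tendency after evolution: {sum(population) / POP:.2f}")
```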

Filed under AI, Anthropotropism, Technology, our Mirror

Bot suggests tasks for human users

Suggestbot, developed by Dan Cosley at Cornell University in Ithaca, New York, and colleagues, could help online communities such as Wikipedia and Slashdot distribute editing tasks. Such organisations rely on members to add and edit content but, as work piles up, it can be hard even for dedicated users to pick out appropriate tasks.

Suggestbot links tasks with people’s interests. It can comb through thousands of Wikipedia articles with a “needs work” tag and compare them to the list of previously edited articles on a user’s profile, looking for similar articles. To test whether this could increase productivity, Cosley studied the work of 91 Wikipedia editors, who had collectively requested 3094 tasks. He tested three versions of the Suggestbot algorithm – one compared titles of “needy” articles with those in the editor’s profile, the second paired people with tasks that were popular among editors with a similar history, and the third looked for links between needy articles and those in an editor’s profile.

Read this article from New Scientist
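
A rough sketch of the first variant (matching the titles of “needy” articles against the titles an editor has already worked on) is below. It is not Suggestbot’s actual code: the sample article titles, the word-overlap similarity measure, and the helper names are illustrative assumptions.

```python
# Toy title-matching recommender: rank "needs work" articles by how much their
# titles overlap with the titles in an editor's edit history.

def title_words(title: str) -> set:
    return set(title.lower().split())

def similarity(a: str, b: str) -> float:
    """Jaccard overlap between the word sets of two titles."""
    wa, wb = title_words(a), title_words(b)
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def suggest(needy_articles, edited_articles, top_n=3):
    scored = []
    for needy in needy_articles:
        best = max(similarity(needy, done) for done in edited_articles)
        scored.append((best, needy))
    return [title for _, title in sorted(scored, reverse=True)[:top_n]]

needy = ["History of artificial intelligence", "Dutch tulip mania", "Neural network pruning"]
edited = ["Artificial intelligence", "Artificial neural network"]
print(suggest(needy, edited))  # the AI-related "needy" articles rank first
```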

Filed under AI, Business 2.0, The Semantic Web, Wiki

Wikipedia promises to power the semantic web

Software that generates a list of reading material tailored to a person’s individual interests has been developed by a PhD student in the US… “Increasingly, a net user who wants to learn more about a subject will read its Wikipedia page,” he adds. “However, for further depth in the subject, there has been no system for advising the user which other [Wikipedia] articles to read, and in which order.”

So Wissner-Gross experimented with algorithms that analyse the hypertext link structure of the site. He used these to find the “most important” Wikipedia pages on a particular topic. He also used them to find pages within a particular area, like physics, that also had information about another topic of interest, such as helicopters. An algorithm similar to that used by Google was particularly effective, he found: it assesses a page’s popularity by examining the number of other pages that link to it and the popularity of those linking pages. Another algorithm, which examines the number of links needed to get from one article to another, also produced good results with shorter lists.

Read this article from New Scientist
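
The Google-style ranking the excerpt describes can be sketched in a few lines. This is a toy illustration, not Wissner-Gross’s code: the link graph is made up, and the damping factor and iteration count are conventional defaults, not values from the article.

```python
# Toy PageRank-style ranking: a page is important if many pages link to it,
# weighted by the importance of those linking pages.

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing) if outgoing else 0.0
            for target in outgoing:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

toy_graph = {
    "Physics": ["Helicopter", "Quantum mechanics"],
    "Helicopter": ["Physics"],
    "Quantum mechanics": ["Physics"],
    "Aviation": ["Helicopter"],
}
for page, score in sorted(pagerank(toy_graph).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")  # "Physics" ranks highest in this toy graph
```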

Researchers at the Technion-Israel Institute of Technology have found a way to give computers encyclopedic knowledge of the world to help them “think smarter,” making common sense and broad-based connections between topics just as the human mind does…. The program devised by the Technion researchers helps computers map single words and larger fragments of text to a database of concepts built from the online encyclopedia Wikipedia, which has over one million articles in its English-language version. The Wikipedia-based concepts act as “background knowledge” to help computers figure out the meaning of the text entered into a Web search, for instance.

Read this article from Physorg.com
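
The mapping described above, from a text fragment to a vector of Wikipedia concepts that serve as background knowledge, can be sketched as follows. This is not the Technion group’s implementation (the literature usually calls this style of approach Explicit Semantic Analysis); the miniature concept “articles” and the word-overlap weighting are stand-in assumptions.

```python
# Toy concept mapping: represent a text fragment as weights over Wikipedia-like
# concepts, then compare fragments through those concept vectors.
import math

concepts = {
    "Mouse (computing)": "pointing device cursor click computer desk",
    "Mouse": "rodent small mammal cheese whiskers tail",
    "Keyboard": "keys typing computer input device",
}

def concept_vector(text):
    words = set(text.lower().split())
    # Weight each concept by how many of its article's words appear in the text.
    return {name: len(words & set(article.split()))
            for name, article in concepts.items()}

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

a = concept_vector("click the mouse next to the computer keyboard")
b = concept_vector("a small rodent ate the cheese")
print(a)             # the computing sense of "mouse" gets the highest weight
print(cosine(a, b))  # 0.0: the two fragments map to different concepts
```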

Contribute to the Whats New Media Wiki articles on Wikipedia and the Semantic Web

Filed under AI, The Semantic Web, Ubiquity, Wiki