Raghavan started with the premise that people don’t want to search; they want to get tasks done. Search engines spend very little time servicing you compared to the time you spend composing queries, evaluating results, and so on. This is backwards: why shouldn’t machines be working harder than we are? He proposes that the grand challenge is to devise general platforms for semantic search, that is, searches able to derive meaning from the search terms presented to them.
Read this item from ZDNet’s Between the Lines
Instead of being a web search engine, spiders and all, iReader is a tool to create synopses of content based on browsing, not searching, and on mousing, not clicking. These distinctions are important, in large part for legal reasons intended to keep the Googlers at bay. Searching pretty much requires scraping the Internet for content that is then indexed, while iReader’s new browsing metaphor doesn’t kick into action until the user mouses over a URL (no clicking required, hence no stepping on the toes of Google or any Google competitors). Only then does a Syntactica server take a quick look at the URL, process it through the same linguistic engine used in ePrecis, then spit out a short synopsis of the content. The fact that this can take place in real time with a lot of people online at any one time is pretty darned impressive.
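That hover-then-summarize flow can be sketched in a few lines. Everything here is illustrative: `fetch_page`, `summarize`, and `on_mouse_over` are invented stand-ins for the proprietary Syntactica/ePrecis pipeline, and the "summarizer" just truncates text, but the shape of the flow (nothing fetched until the user hovers, no pre-built crawl index) matches the description:

```python
# Sketch of the mouse-over synopsis flow: nothing happens until the user
# hovers over a URL; only then is the page fetched and summarized on demand.
# fetch_page and summarize are placeholders for the real linguistic engine.

def fetch_page(url):
    # Placeholder: retrieve the page content on demand (no pre-crawled index)
    return "Full text of the page at " + url

def summarize(text, max_words=10):
    # Placeholder for the linguistic engine: here, just truncate the text
    words = text.split()
    return " ".join(words[:max_words])

def on_mouse_over(url):
    # Triggered by a hover, not a click and not a crawl
    return summarize(fetch_page(url))

print(on_mouse_over("http://example.com/article"))
```

The key design point is that the server does work per hover rather than per crawl, which is what keeps it on the "browsing" rather than "searching" side of the legal line the article describes.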
Doing a search using this system is simply a matter of entering a natural-language query, which is parsed and indexed in exactly the same manner, yielding another vector. This search vector is plotted in the multidimensional space and the search results are those vectors (those articles) that are nearest in space to the query vector. The closer to the query vector an article vector lies, the more likely that article is to answer the question posed in the query…

The great problem with obtaining meaning from text is understanding the context in which that text appears, and this is where Syntactica’s lexicon shines. This lexicon is a compilation of a meticulous word-by-word analysis of Webster’s 3rd New International Dictionary, unabridged. This compilation considers the many different meanings and contexts of each individual word in the lexicon, and assigns a set of values to each word, which is a heck of a lot of work and explains why most competing products (there turn out to be a bunch) don’t have it.
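The nearest-vector retrieval described above is the classic vector-space model, and it can be sketched minimally. The article vectors below are toy three-dimensional stand-ins; in the real system they would be derived from parsing each article through the lexicon:

```python
import numpy as np

def cosine_similarity(a, b):
    # "Nearness" in the multidimensional space, measured as the
    # cosine of the angle between the two vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy article vectors (in practice, produced by the linguistic engine)
articles = {
    "weather report": np.array([0.9, 0.1, 0.0]),
    "stock summary":  np.array([0.1, 0.9, 0.2]),
    "storm warning":  np.array([0.8, 0.0, 0.3]),
}

def search(query_vector, k=2):
    # Rank every article by how close its vector lies to the query vector
    ranked = sorted(articles.items(),
                    key=lambda item: cosine_similarity(query_vector, item[1]),
                    reverse=True)
    return [title for title, _ in ranked[:k]]

query = np.array([0.85, 0.05, 0.1])  # a parsed natural-language query
print(search(query))  # → ['weather report', 'storm warning']
```

The query is indexed exactly like a document, so search reduces to a nearest-neighbor lookup; all the hard work lives in producing good vectors, which is where the hand-built lexicon comes in.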
Read I, Cringely’s Just the Facts Ma’am. Previously from WNM: The Future for “Search” – What we mean when we say human, vertical and Defining and building a semantic web
The seemingly exponential growth of portable technology has sparked fears that people are becoming addicted to, or swamped by, gadgets and their uses…One of the conclusions reached by experts was that “tech overload” is the price people have to pay for always-on communication, where the line between work and play has become blurred.
Read this article from the BBC. Previously from WNM: Is “Internet addiction,” addiction? and Work induced technology addiction.
3C = Content, Commerce, Community | 4th C = Context | P = Personalization | VS = Vertical Search
This, I submit, is the formula for the future: Web 3.0 = (4C + P + VS).
Sramana Mitra charts a potential web evolution at the Read/Write Web
Given that few predicted how Web 2.0 would come to be defined during the early stages of Web 1.0, the concept of Web 3.0 is still a bit fuzzy, and Web 4.0, the WebOS on Nova’s map, is really hazy. The WebOS implies that machine intelligence has reached the point at which the Internet becomes the planetary computer, a massive web of highly intelligent interactions.
More from ZDNet’s Between the Lines
Previously from WNM: Web 3.0: The Common Sense Internet, Looking back, Looking ahead and WebOS: Liberating software from hardware
Contribute to the What’s New Media articles for “Web 3.0” and “Web OS”
The building serves as a testing ground for developing and perfecting wireless sensing technology to connect major chunks of the real world to the Internet. Such networks could monitor the environment for pollutants, gauge whether structures are at risk of collapse or remotely follow medical patients in real time….Once the stuff of science fiction, wireless sensor networking is quickly catching on, attracting the attention of the military, academics and corporations. Just as the Internet virtually connected people with personal computers, the prospect of wireless arrays sprinkled in buildings, farmland, forests and hospitals promises to create unprecedented links between people and physical locations.
Read this article from WIRED
Suggestbot, developed by Dan Cosley at Cornell University in Ithaca, New York, and colleagues, could help online communities such as Wikipedia and Slashdot distribute editing tasks. Such organisations rely on members to add and edit content but, as work piles up, it can be hard even for dedicated users to pick out appropriate tasks.
Suggestbot links tasks with people’s interests. It can comb through thousands of Wikipedia articles with a “needs work” tag and compare them to the list of previously edited articles on a user’s profile, looking for similar articles. To test if this could increase productivity, Cosley studied the work of 91 Wikipedia editors, who had collectively requested 3094 tasks. He tested three versions of the Suggestbot algorithm – one compared titles of “needy” articles with those in the editor’s profile, the second paired people with tasks popular with editors with a similar history, and the third looked for links between needy articles and those in an editor’s profile.
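The first of the three algorithm versions, matching titles of “needs work” articles against titles in an editor’s history, can be sketched with simple word overlap. The scoring here (counting shared title words) is an assumption chosen for illustration; the article doesn’t specify Suggestbot’s actual similarity measure:

```python
def title_words(title):
    # Normalize a title into a set of lowercase words
    return set(title.lower().split())

def suggest(needy_titles, edited_titles, k=2):
    # Sketch of "version one": score each "needs work" article by word
    # overlap between its title and the titles the editor has worked on
    profile = set()
    for t in edited_titles:
        profile |= title_words(t)
    scored = sorted(needy_titles,
                    key=lambda t: len(title_words(t) & profile),
                    reverse=True)
    return scored[:k]

edited = ["History of jazz", "Jazz piano", "Blues music"]
needy = ["Jazz guitar", "Quantum chemistry", "History of blues music"]
print(suggest(needy, edited))  # → ['History of blues music', 'Jazz guitar']
```

The other two versions swap out this similarity step: one recommends tasks popular among editors with similar histories (collaborative filtering), the other follows hyperlinks between needy articles and the editor’s past work.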
Read this article from New Scientist
TV viewers with sensitive ears may be glad to hear about a [beep] new patent that will help to ensure their [beep] doesn’t get [beep] up. The patent, submitted by Matthew T. Jarman of Salt Lake City, seeks to monitor TV content for questionable words, phrases, or even subjects, and censor them on the fly based on the viewer’s preferences. The device would ideally be able to analyze signals from cable, satellite, network, or broadcast television, as well as streaming video content.

The system uses a computer and a Personal Video Recorder (PVR) to monitor the closed captioning text that comes with most television programs. The user would then be able to use a menu—protected by a user ID and password—to select at least one blocking word, and when the computer comes across keywords that the user has chosen to block, the system will mute the broadcast so that viewers won’t have to wash out their ears with soap later. The patent also describes a method by which the PVR would be able to identify multiple meanings of a word (a female dog versus a mean woman, for example) and allow the user to differentiate between the two when deciding what gets blocked.
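The core mechanism, scanning closed-caption text against a per-user blocklist and muting on a match, is simple to sketch. This is a minimal illustration, not the patented method; in particular it ignores the word-sense disambiguation step the patent describes:

```python
def load_blocklist(user_words):
    # Per-user blocklist (in the patent, chosen via an ID/password-
    # protected menu)
    return {w.lower() for w in user_words}

def process_captions(caption_stream, blocklist):
    # Scan each line of closed-caption text as it arrives; mute the
    # audio when a blocked word appears, pass it through otherwise
    actions = []
    for line in caption_stream:
        words = {w.strip(".,!?").lower() for w in line.split()}
        actions.append("MUTE" if words & blocklist else "PASS")
    return actions

captions = ["What a lovely day", "Oh darn it all", "Back to the news"]
print(process_captions(captions, load_blocklist(["darn"])))
# → ['PASS', 'MUTE', 'PASS']
```

Because captions typically arrive slightly ahead of (or alongside) the audio, a real PVR would also need to buffer the broadcast long enough to mute the right moment, a timing problem this sketch glosses over.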
Read this article from Ars Technica