When mainstream media starts talking about the Semantic Web, one can infer that it is not just another buzzword within research labs. Recently The Economist and BBC Online covered the topic, and earlier this month Thomson Reuters announced a service that will help with semantic markup.
The term Semantic Web was first used by Sir Tim Berners-Lee, the inventor of the World Wide Web, who envisioned a web in which “… day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines”. The most significant aspect of the semantic web is the ability of machines to understand and derive meaning from web content. The term Web 3.0 was introduced in 2006 for a next-generation web with an emphasis on semantic web technologies. Though the exact meaning and functionality of Web 3.0 remain vague, most experts agree that we can expect Web 3.0 in some form starting in 2010.
There are two approaches to extracting semantic knowledge from web content. The first involves extensive natural language processing of the content, while the second places the burden on content publishers to annotate, or mark up, their content. This marked-up content can then be processed by search engines, browsers, or intelligent agents. Markup overcomes the shortcomings of natural language processing, which tends to be non-deterministic; furthermore, meaning depends not only on the written text but also on information that is not captured in it. For instance, an identical statement from Jay Leno or from Treasury Secretary Hank Paulson may have a totally different meaning.
The ultimate goal of Web 3.0, intelligent agents that can understand web content, is still a few years away. Meanwhile, we can start capturing information and building constructs into our web pages that help search engines and browsers extract context and data from content. There are multiple ways of adding semantic markup to web content that browsers and search engines understand.
Semantic Search Engines
On September 22, 2008, Yahoo announced that it will extract RDFa data from web pages, a major step toward improving the quality of search results. Powerset (recently acquired by Microsoft) is initially allowing semantic searches on content from wikipedia.org, which is fairly structured content. Hakia uses a different approach: it processes unstructured web content to gather semantic knowledge, a method that is language based and dependent on grammar.
Semantic markups - RDFa and microformats
The W3C consortium has authored specifications for annotation using RDF, an XML-based standard that formalizes relationships between entities as triples. A triple is a notation involving a subject, a predicate, and an object; for example, in “Paris is the capital of France” the subject is Paris, the predicate is capital, and the object is France. RDFa is an extension to XHTML that supports semantic markup, allowing RDF triples to be extracted from web content.
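As a rough sketch, the Paris example above could be expressed in RDFa along these lines. Note that the “ex:” vocabulary and the capitalOf property name here are hypothetical, used only to illustrate how the attributes map to a triple:

```html
<!-- Illustrative RDFa snippet; the ex: vocabulary is hypothetical -->
<div xmlns:ex="http://example.org/terms#"
     about="http://example.org/Paris">
  <span property="ex:capitalOf" content="France">
    Paris is the capital of France
  </span>
</div>
```

The about attribute names the subject, property names the predicate, and content supplies the object, so an RDFa-aware parser can extract the triple (Paris, capitalOf, France) from the page.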
Microformats are simpler markups using XHTML and HTML tags that can be easily embedded in web content. Many popular sites have already started using microformats: Flickr uses geo for tagging photo locations and hCard and XFN for user profiles, while LinkedIn uses hCard, hResume, and XFN on user contacts.
Microformat hCard example in HTML and the resulting output on the browser page:

```html
<div class="vcard">
  <span class="fn">Atul Kedar</span>
  <div class="org">Avenue A | Razorfish</div>
  <div class="adr">
    <span class="street-address">1440 Broadway</span>
    <span class="locality">New York</span>,
    <span class="region">NY</span>
    <span class="country-name">USA</span>
  </div>
</div>
```

Rendered on the page, this appears as ordinary text:

Atul Kedar
Avenue A | Razorfish
1440 Broadway, New York, NY USA
Automated Semantic markup services and tools
Another interesting development is in the area of automatic entity extraction from content, for which annotation applications and web services are being developed. Thomson Reuters is now offering a professional service, OpenCalais, to annotate content, and Powerset is working toward similar offerings. These services reduce the need for content authors to painfully go through content and manually tag all relationships. Unfortunately, they are not perfect and need manual cross-checking and edits. Other similar annotation services and tools are Zemanta, SemanticHacker, and TextWise.
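To illustrate the kind of result such a service produces, the sketch below shows plain text after hypothetical entity annotation. The markup and type names are illustrative only and do not reflect the actual OpenCalais response format, which returns richer RDF metadata:

```html
<!-- Hypothetical annotated output from an entity extraction service -->
<p>
  <span typeof="Person">Hank Paulson</span> spoke to reporters from
  <span typeof="Company">Thomson Reuters</span> in
  <span typeof="City">New York</span>.
</p>
```

An author would still review such output by hand, since automatic extraction can mislabel or miss entities.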
As Web 3.0 starts to take shape, it will initially affect front-end designers working on the web presentation layer, as organizations demand more semantic markup within content. In due course, CMS architects will have to update the design of data entry forms and of entity information records in a manner that facilitates semantic markup and removes any duplication of entity data or entity relationships. Entity data such as author information, people information, addresses, event details, location data, and media licensing details are perfect candidates for new granular storage schemes and data entry forms.