Saturday, December 30, 2006

SEO 2.0 And The Pageless Web: The RIA Search Conundrum

Search engines and natural search optimizers are starting to confront new difficulties in the crawling, indexing, and measurement of Web site content. In the page-based paradigm, these activities have been relatively straightforward, but challenging questions arise as more Webmasters employ rich internet applications (RIAs) designed fundamentally to improve Internet navigation and user experience. As RIA adoption grows, engines, marketers, SEMs, and analytics companies will need new strategies to reap the mutual benefits of finding, being found, and being counted.

The Pageless Web: Implementing rich internet applications turns many of the cornerstones of search engine algorithms and SEM strategy on their head. The central issue is that the searchable Web is based on the crawling and indexing of pages, each with its own unique URL. RIA-based designs rely less on the reloading of pages and often have little or no need for unique URLs at all.
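To make the problem concrete, below is a minimal sketch of pageless navigation in TypeScript, assuming a hypothetical #content container and /fragments/about.html endpoint: the visible content changes on every click, but the URL never does, so a crawler following links finds nothing new to index.

```typescript
// "Pageless" navigation: content is swapped into the page via
// XMLHttpRequest, so the browser never requests a new URL. The
// /fragments/about.html path and #content container are assumptions.
function loadSection(fragmentPath: string): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", fragmentPath);
  xhr.onload = () => {
    const container = document.getElementById("content");
    if (container && xhr.status === 200) {
      // The visible content changes, but location.href does not:
      // a link-following crawler never discovers this state.
      container.innerHTML = xhr.responseText;
    }
  };
  xhr.send();
}

// Every "page" of the application lives at the same URL.
document.getElementById("nav-about")?.addEventListener("click", (e) => {
  e.preventDefault(); // suppress the normal page load
  loadSection("/fragments/about.html");
});
```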

Unless specific search strategies are taken into consideration, the gains in user satisfaction will come at the expense of natural search engine performance and returns. At their core, rich internet applications shield data from search engines, and a story of increasing complexity in data management and measurement is emerging.

The user experience of some RIA-based interfaces is nothing short of stunning compared to similar page-based experiences. With benefits such as seamless data delivery, faster query responses, and less need to refocus on freshly loaded pages, it is easy to understand why marketers will inevitably adopt RIAs in droves.

Responsibility is on the engines: Search engines are starting to feel the crunch of the increased adoption of rich interfaces, not only in their ability to find and crawl relevant new content, but also in their own use of AJAX in various applications, which effectively decreases a site’s page views, one of the primary measures of Web popularity and performance.
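One common workaround on the measurement side is to record a "virtual" page view whenever the interface swaps content in place, since no real page load occurs. The sketch below assumes a hypothetical /analytics/hit logging endpoint; real analytics packages expose an equivalent virtual-pageview call.

```typescript
// Record a "virtual" page view for a content swap that triggers no page
// load. The /analytics/hit endpoint is an assumption for this sketch.
function recordVirtualPageView(virtualPath: string): void {
  // A 1x1 image request is the classic tag-based way to log a hit.
  const beacon = new Image();
  beacon.src =
    "/analytics/hit?path=" + encodeURIComponent(virtualPath) +
    "&t=" + Date.now(); // cache-buster so every hit reaches the server
}

// Call it wherever the application swaps content in place:
recordVirtualPageView("/virtual/products/overview");
```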

The responsibility is shared by skilled optimizers: A better option for optimizers at this point may be to build a parallel mirror site with a unique URL structure for engines to crawl. The burden for Webmasters is that two complete Web sites must be managed in order to reap the benefits of search engine visibility.
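As a rough illustration of the mirror-site approach, the sketch below writes each piece of content the application serves as a plain HTML page with its own unique URL; the sections data and the mirror/ output layout are assumptions for the example.

```typescript
// Generate a crawlable static mirror of content the RIA otherwise loads
// dynamically. Data and paths here are illustrative assumptions.
import * as fs from "fs";
import * as path from "path";

interface Section {
  slug: string;   // becomes the unique URL: /mirror/<slug>.html
  title: string;
  body: string;   // the same content the rich interface loads in place
}

const sections: Section[] = [
  { slug: "about", title: "About Us", body: "<p>Company history...</p>" },
  { slug: "products", title: "Products", body: "<p>Catalog...</p>" },
];

const outDir = "mirror";
fs.mkdirSync(outDir, { recursive: true });

for (const s of sections) {
  const html =
    `<html><head><title>${s.title}</title></head>` +
    `<body><h1>${s.title}</h1>${s.body}` +
    // Link back so human visitors reach the richer interface.
    `<p><a href="/">View this content in our full application</a></p>` +
    `</body></html>`;
  fs.writeFileSync(path.join(outDir, `${s.slug}.html`), html);
}
```

The obvious cost is that the mirror must be regenerated every time the underlying content changes, which is exactly the double-maintenance burden described above.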

Webmasters should design sites with users, accessibility, and search engines in mind. Google does quite a good job with much JavaScript, but complicated AJAX can present issues for any crawler. Rich interfaces are not an immediate threat to Google’s relevancy: the vast majority of sites are still built as static Web pages, which pose little problem. The positive note is that people building RIA/AJAX sites tend to have a technical skill set and thus at least consider the impact on search engine crawlability.
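A lighter-weight way to keep a rich interface crawlable is progressive enhancement: every navigation element is a real link that a crawler (or a visitor without JavaScript) can follow, and script intercepts the click only to upgrade the experience. A minimal sketch, assuming a hypothetical a.ajax-nav link class and #content container:

```typescript
// Progressive enhancement: the <a href> targets are real, indexable URLs;
// script hijacks the click only when it can load the content in place.
document.querySelectorAll<HTMLAnchorElement>("a.ajax-nav").forEach((link) => {
  link.addEventListener("click", (e) => {
    const container = document.getElementById("content");
    if (!container) return;        // fall back to the normal page load

    e.preventDefault();
    const xhr = new XMLHttpRequest();
    xhr.open("GET", link.href);    // the same URL the crawler indexes
    xhr.onload = () => {
      if (xhr.status === 200) {
        container.innerHTML = xhr.responseText;
      }
    };
    xhr.send();
  });
});
```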

Based on the efficiencies and improvements in user experience, one can expect growing demand for Web-based applications at the enterprise level in the coming months. But marketers and developers building rich Internet applications should also be aware of the potential for lost natural search benefits, or be prepared to create and maintain full mirror sites to appease crawler-based engines and other user agents. All eyes should continue to look to Yahoo and Google, as their continued employment of RIAs and reliance on publisher data will force them to expand the boundaries of Web site crawlability and measurement.

Source: MediaPost
