Scraping and scrapping…

On the heels of some big data scraper and indexer acquisitions in Canada and elsewhere over the past year (think Radian6 and PostRank, among others), the British Columbia Supreme Court’s recent decision on Century 21’s efforts to tame Zoocasa’s reproduction of its real estate listings has the Canadian industry going back to its business models to re-evaluate its assumptions about the “open” web.

Perhaps less dramatically, the decision confirms that website terms of use are enforceable against users who have reasonable notice of those terms, and that such terms can include prohibitions against the wholesale reproduction of content (such as text and pictures) published on the website. In this respect, most web-based search engines and crawlers might take comfort in distinguishing their architecture from Zoocasa’s practices during the period covered by the judgment.

The decision’s more interesting aspect, however, involves the legitimacy of crawling in and of itself: at a minimum, it refuses to recognize a general public policy exemption to copyright infringement for such activities, and it emphasizes the importance of honouring robots.txt or similar protocols, which enable website proprietors to wall themselves off from crawling or indexing activities.

For a recent blog post with our very own Mike Morgan’s views on the decision, check out this link, and feel free to get in touch with Mike at mmorgan@lwlaw.com if you have any questions.
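For readers unfamiliar with the protocol mentioned above: a robots.txt file is a plain-text file placed at a site’s root that tells well-behaved crawlers which paths they may fetch. The directives below are a generic illustration of the format (the site and crawler names are hypothetical, not drawn from the case):

```
# Served at https://example.com/robots.txt (hypothetical site)

User-agent: *          # applies to all crawlers
Disallow: /listings/   # ask crawlers to stay out of listing pages

User-agent: SomeCrawler   # a specific crawler, by its declared name
Disallow: /               # exclude it from the entire site
```

Compliance with robots.txt is voluntary under the Robots Exclusion Protocol; the point relevant to the decision is that publishing such a file puts crawlers on notice of the proprietor’s wishes.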
