B.C. Case Adds to the Rules of Robotics
The robot exclusion protocol is a relatively well-known way for website operators to control whether their website is searched, and, if so, by whom. One part of the robot exclusion standard specifies how robots and spiders can identify themselves to a website. (For example, the Google spider/robot is called googlebot.) Another part of the robot exclusion standard specifies how websites can tell robots and spiders not to visit a website (or which parts of a website not to visit).
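The two halves of the protocol can be seen together in a short sketch using Python's standard `urllib.robotparser` module. The robots.txt content, the `/private/` path, and the bot names other than googlebot are illustrative assumptions, not taken from the case:

```python
from urllib import robotparser

# An illustrative robots.txt: the site admits googlebot everywhere
# except /private/, and tells all other robots to stay out entirely.
ROBOTS_TXT = """\
User-agent: googlebot
Disallow: /private/

User-agent: *
Disallow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A robot identifies itself by its user-agent name and checks
# whether it may fetch a given path before crawling it.
print(parser.can_fetch("googlebot", "/listings/123"))   # allowed
print(parser.can_fetch("googlebot", "/private/x"))      # disallowed
print(parser.can_fetch("SomeOtherBot", "/listings/123"))  # disallowed
```

A compliant crawler runs a check like this before each request; a crawler like Zoocasa's that neither identifies itself nor consults robots.txt simply skips this step, which is the conduct the Court was asked to weigh.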
The Zoocasa robot was somewhat unusual in that it did not identify itself and did not observe the robot exclusion protocol. Century 21 argued that this behaviour should militate against a finding of fair dealing. However, the Court found that Zoocasa’s failure to follow the robot exclusion protocol was not relevant to a fair dealing analysis. The Court observed that fair dealing only arises when consent has not been given for the use of a copyrighted work, and the robot exclusion standard is simply another way of denying consent. The purpose of a fair dealing analysis is to assess the nature of a dealing with a copyrighted work, not whether consent was given for that dealing. Since Zoocasa’s failure to follow the robot exclusion standard related only to the acquisition of a copyrighted work and not its subsequent uses, the Court did not consider that failure relevant to fair dealing.
In conducting its fair dealing analysis the Court also made another observation that may be very significant for Internet search engines. Specifically, the Court observed that:
[Zoocasa’s use was] not a situation of a one-time copy being taken. It is conduct consisting of repeated actions by the defendants. In my view the amount of dealing exceeds what is fair.
Since an Internet search engine must repeatedly revisit a website to keep its index current with the content available on that website, this factor (unless modified) will always weigh against search engine providers. Instead of focusing on one-time copies versus repeated copies, this factor might be made more balanced by asking whether the number of copies taken is reasonable having regard to the nature of the dealing. In some instances it may be reasonable to limit the use of copyrighted materials to a few copies, or perhaps even a single copy, but for an Internet search engine a restriction to a single copy effectively means the material cannot be indexed at all.