The robot, or crawler, keeps lists of URLs it can index and regularly downloads the corresponding documents. If, while analyzing a document, the robot discovers a new link, it adds that link to the list. As a result, any document or site that has links pointing to it can be found by the robot, that is, discovered by Yandex search.
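The loop described above — maintain a list of known URLs, download each document, and add any newly discovered links back to the list — can be sketched as a simple breadth-first crawl. This is an illustrative sketch only, not Yandex's implementation; `fetch` and `extract_links` are hypothetical stand-ins for the robot's downloader and link parser.

```python
from collections import deque

def fetch(url):
    """Hypothetical downloader: would return the document body."""
    return ""

def extract_links(base_url, html):
    """Hypothetical parser: would return link targets found in html."""
    return []

def crawl(seed_urls, limit=100):
    queue = deque(seed_urls)        # URLs the robot knows about
    seen = set(seed_urls)
    indexed = []
    while queue and len(indexed) < limit:
        url = queue.popleft()
        html = fetch(url)
        indexed.append(url)         # document is downloaded and indexed
        for link in extract_links(url, html):
            if link not in seen:    # a newly discovered link joins the list
                seen.add(link)
                queue.append(link)
    return indexed
```

The `seen` set is what makes link discovery terminate: a URL is queued at most once, no matter how many pages link to it.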
Yandex uses several different kinds of robots. For example, one robot indexes RSS feeds for blog search, and another indexes only images. The most important is the main indexing robot, whose task is to find and index information to maintain the main search database.
The main robot is assisted by a fast robot, whose task is to promptly index fresh, time-sensitive information. If you see two copies of the same document among the indexed pages of your site, the site was probably indexed by the fast robot in addition to the main one.
To learn how to tell the robots apart in your server logs, read the relevant help topic.
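Each robot identifies itself in the User-Agent field of its requests, so one way to tell them apart is to scan your server's access log for those substrings. The sketch below assumes a few commonly seen Yandex User-Agent tokens (`YandexBot`, `YandexImages`, `YandexBlogs`); treat the exact list as an assumption and consult the help topic mentioned above for the authoritative strings.

```python
# Illustrative sketch: classify access-log lines by Yandex robot
# User-Agent substrings. The marker strings are assumptions, not an
# official list.
ROBOT_MARKERS = {
    "YandexBot": "main indexing robot",
    "YandexImages": "image-indexing robot",
    "YandexBlogs": "blog-search robot",
}

def classify(log_line):
    """Return a robot description if the line matches, else None."""
    for marker, name in ROBOT_MARKERS.items():
        if marker in log_line:
            return name
    return None

line = ('203.0.113.5 - - [10/Oct/2023:12:00:00 +0000] "GET / HTTP/1.1" 200 '
        '"Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)"')
```

Running `classify(line)` on the sample above matches the `YandexBot` token and reports the main indexing robot; lines from ordinary browsers return `None`.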