If you follow the templates, you'll want an accumulator holding all the file names seen so far while walking the tree. You can stop the entire algorithm as soon as you find a duplicate.
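Here is a minimal sketch in Python. The notes don't fix a representation, so this sketch assumes, purely for illustration, that a tree is either a file name (a string) or a directory modeled as a (name, children) pair:

```python
# Assumed representation (an illustration, not from the assignment):
# a tree is either a file name (a string) or a directory,
# a (name, children) pair.

def has_duplicate(tree, seen=None):
    """Walk the tree with an accumulator of all names seen so far;
    stop the whole search as soon as any name repeats."""
    if seen is None:
        seen = []                  # the accumulator: every file name seen so far
    if isinstance(tree, str):      # a file: check it against the accumulator
        if tree in seen:
            return True
        seen.append(tree)
        return False
    _dirname, children = tree      # a directory: search each child in turn
    return any(has_duplicate(child, seen) for child in children)

# Example: the search returns True at the second "a.txt" and stops.
tree = ("root", ["a.txt", ("docs", ["b.txt", ("old", ["a.txt"])])])
print(has_duplicate(tree))   # True
```

Because `any` short-circuits, no further subtrees are visited once a duplicate turns up.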
Another, closely related approach: first flatten the tree into one big list of file names, and then look for duplicates in that list. That way, the rest of the computation gets to ignore the tree structure.
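A sketch of this version, under the same assumed representation: flatten first, then run a plain list-duplicates check.

```python
def flatten(tree):
    """Collect every file name in the tree into one flat list."""
    if isinstance(tree, str):
        return [tree]
    _dirname, children = tree
    names = []
    for child in children:
        names.extend(flatten(child))
    return names

def list_has_duplicate(names):
    """The simple list-duplicates check: compare each
    name against everything after it."""
    for i, name in enumerate(names):
        if name in names[i + 1:]:
            return True
    return False

def has_duplicate_flat(tree):
    return list_has_duplicate(flatten(tree))
```

Note that `flatten` walks the whole tree before `list_has_duplicate` ever looks at a single name.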
Which is best?
Arguably, you shouldn't throw away the data's original, interesting structure; ignoring the structure the data comes with is generally considered a bad sign. Here, the accumulator version does take some advantage of the data structure and the algorithm: it can stop as soon as it finds a duplicate.
On the other hand, the structure of the data arguably isn't interesting during this particular computation, since duplication has nothing to do with where a file sits in the tree. "Reducing" the tree-duplicates problem down to the simpler list-duplicates problem is an instance of a time-honored problem-solving strategy.
Again, the accumulator version can stop immediately when it finds a duplicate, so it might not have to examine the entire tree. With some thought and the idea of lazy computation (see COMP 311), the flattening version can do the same thing, as the sketch below shows.
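In a language with generators, the lazy version takes less machinery than it might sound. A Python sketch: the generator produces file names on demand, so when the consumer returns early, the unvisited subtrees are never flattened at all.

```python
def lazy_flatten(tree):
    """Yield file names one at a time instead of building the whole list."""
    if isinstance(tree, str):
        yield tree
    else:
        _dirname, children = tree
        for child in children:
            yield from lazy_flatten(child)

def has_duplicate_lazy(tree):
    seen = []
    for name in lazy_flatten(tree):
        if name in seen:
            return True        # stop here; the rest of the tree is never visited
        seen.append(name)
    return False
```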
With additional thought, either version can be implemented in O(n log n) time. For the accumulator version, the accumulator must be not a list but a more sophisticated data structure, such as a balanced binary tree (see COMP 212). For the flattening version, you'll need to sort the list with an efficient algorithm, like quicksort (see later in COMP 210).
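For the flattening version, sorting makes any duplicates adjacent, so one linear scan finds them afterward. A sketch using Python's built-in sort, which runs in O(n log n) (the course's quicksort would play the same role):

```python
def has_duplicate_sorted(tree):
    """O(n log n): sort the flattened names, then scan once;
    any duplicates must now sit next to each other."""
    names = sorted(flatten(tree))    # flatten as defined earlier
    return any(a == b for a, b in zip(names, names[1:]))
```

In everyday Python one would likely reach for a hash set instead, but the balanced-tree and sorting versions are the ones with the guaranteed O(n log n) bound discussed here.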
In short, it isn't clear which is better.