I've found, on a few different projects that operate on files, that testing them was surprisingly complex. I've had a number of failures in my own modules due to things like forgetting to add an empty test file to the MANIFEST; specifying a path like dir/file so tests pass on every machine I have access to, but fail on Windows; or difficulties when I need to add a file to a test, so I have to update every test that shares the same directory.
So far this is working great for me, allowing me to get my test coverage up more easily than before. I'm interested to see what others would think of this approach.
Is this something that you'd find generally useful?
When should the test dir get cleaned up? I don't want to leave a lot of temporary directories around, but I hate it when a test fails and the data I need to investigate it gets removed. Currently, I just remove every known file, then rmdir the directory; that last part will fail if any files are left.
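To make that concrete, here's a minimal sketch of that cleanup strategy in plain Perl (not the module's actual code): remove each file the tests know they created, then rmdir, which returns false if anything unexpected was left behind.

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);
use File::Spec;

my $dir   = tempdir();
my @known = qw(alpha.txt beta.txt);

# simulate the tests having created the known files
for my $name (@known) {
    open my $fh, '>', File::Spec->catfile($dir, $name) or die "open: $!";
    close $fh;
}

unlink File::Spec->catfile($dir, $_) for @known;  # remove the known files
my $clean = rmdir $dir;                           # false if files remain
print $clean ? "clean\n" : "leftover files in $dir\n";
```

Leaving the rmdir's failure visible is what catches stray files the tests didn't account for.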
I'm thinking of adding options to force cleanup always, or to clean only if all tests are passing; probably by implementing an import method that could be invoked like:
use Test::Directory (cleanup_if_passing => 1);
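One way that option could be wired up is a small import method that records the caller's preference; this is a hypothetical sketch (the My::Test::Directory namespace and $CLEANUP_IF_PASSING variable are made-up names), not the module's shipped code:

```perl
package My::Test::Directory;  # hypothetical namespace for the sketch
use strict;
use warnings;

our $CLEANUP_IF_PASSING = 0;

sub import {
    my ( $class, %opts ) = @_;
    # remember the caller's preference for destruction-time cleanup
    $CLEANUP_IF_PASSING = $opts{cleanup_if_passing} ? 1 : 0;
}

# At destruction time the object could consult the test harness,
# e.g. Test::Builder->new->is_passing (available in recent
# Test::Simple releases), and only clean up when everything has
# passed so far.

1;
```

Checking is_passing at destruction time, rather than at import time, is what lets the data survive when a later test fails.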
Also, there is an is_ok method to check for any inconsistencies. Should I add an option to run it automatically? That would be nice, since it would automatically catch some errors and avoid the cases where all the tests pass but the directory isn't cleaned up (per above). However, I'm not sure how to show good context when it fails, to allow finding the cause easily.
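On the context question, one option might be a helper that dumps the directory's actual contents when the check fails, so the diagnostic shows what was unexpected; this is a hypothetical sketch (dir_context is a made-up name), and its output could be fed to Test::Builder's diag so it lands next to the failing test:

```perl
use strict;
use warnings;

# List what is actually in the directory, so a failed consistency
# check can report why it failed.
sub dir_context {
    my ($path) = @_;
    opendir my $dh, $path or return "cannot opendir $path: $!";
    my @entries = sort grep { $_ ne '.' && $_ ne '..' } readdir $dh;
    closedir $dh;
    return @entries ? "unexpected contents: @entries"
                    : "directory is empty";
}
```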