Deferred loading of modules with a large dependency footprint may improve load times of test scripts and command-line tools.
Autoloading is used, so no performance penalty is imposed apart from loading the module on the first call to it.
Of course, the drawback is that compile-time errors (if any) are delayed until an unexpected moment. Better to check that the modules in question actually exist and load correctly.
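The mechanism can be sketched in a few lines. This is only an illustration of the technique; the package name Lazy::Load and the import signature are hypothetical, not the actual module's API:

```perl
package Lazy::Load;   # hypothetical name, for illustration only
use strict;
use warnings;

# import() installs lightweight stubs; the heavy module is require'd
# only when one of its functions is first called.
sub import {
    my (undef, $module, @funcs) = @_;
    my $caller = caller;
    for my $func (@funcs) {
        no strict 'refs';
        *{"${caller}::$func"} = sub {
            (my $file = "$module.pm") =~ s{::}{/}g;
            require $file;                  # compile errors surface HERE, not at startup
            my $real = $module->can($func)
                or die "$module does not provide $func()";
            no warnings 'redefine';
            *{"${caller}::$func"} = $real;  # replace the stub so later calls go straight through
            goto &$real;                    # transparent tail call into the real sub
        };
    }
}

1;
```

Note how the delayed require is exactly where a misspelled module name would finally blow up, hence the advice above to verify the modules beforehand.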
Say we need some utility functions inside a subroutine or scope, but we'd like to (a) keep them private and (b) keep them unavailable to the rest of the package.
This module would make imports (in fact, any symbol table changes) available only until the end of the enclosing scope.
namespace::clean does a similar thing (and is cool!), but one must be careful to avoid erasing functions that are still needed.
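The idea can be approximated with a guard object that removes the symbol when it goes out of scope. Scope::Lend and lend() are names I made up for this sketch; a real implementation would rather hook the end of the compile-time scope (as B::Hooks::EndOfScope does) instead of relying on DESTROY:

```perl
package Scope::Lend;   # hypothetical name, for illustration only
use strict;
use warnings;

# Install a sub into the caller's package and return a guard;
# when the guard goes out of scope, the symbol is removed again.
sub lend {
    my ($name, $code) = @_;
    my $caller = caller;
    no strict 'refs';
    *{"${caller}::$name"} = $code;
    return bless { pkg => $caller, name => $name }, __PACKAGE__;
}

sub DESTROY {
    my $self = shift;
    no strict 'refs';
    # Caveat: deleting the stash entry nukes the whole glob,
    # including any same-named scalar/array/hash.
    delete ${ $self->{pkg} . '::' }{ $self->{name} };
}

1;
```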
Instead of iterating the same subroutine over and over again, this module passes an iteration counter to the code under test, making it possible to time the iterated snippet precisely.
It also allows one to prepare complex test data and/or check the validity of results by providing "setup" and "teardown" functions.
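The interface might look roughly like this (bench() and its option names are my guesses for illustration, not the module's real API); the point is that the code under test receives $n and loops by itself, so per-call dispatch overhead stays out of the measurement:

```perl
use strict;
use warnings;
use Time::HiRes qw(time);

# Time a snippet that loops $n times internally; optional setup/teardown
# hooks prepare complex test data and validate the results.
sub bench {
    my ($n, %opt) = @_;
    my $data = $opt{setup} ? $opt{setup}->() : undef;   # prepare test data
    my $start = time;
    $opt{code}->($n, $data);                            # snippet loops $n times itself
    my $elapsed = time - $start;
    $opt{teardown}->($data) if $opt{teardown};          # check results / clean up
    return $elapsed / $n;                               # seconds per iteration
}
```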
1) Sometimes I feel like applying a series of Test::More-like checks to user input, a dynamically loaded plugin, an object instance being passed around, etc. Unfortunately, Test::Builder makes that impossible without turning the whole application into a test script. Hence this OO interface.
2) Instead of using ok($condition, $message) as a foundation, this module uses a refute statement, which is an inverted assertion: if everything is OK, we only need one bit of information; if something went wrong, we need more details. Think of Unix programs returning 0 on success and different error codes otherwise.
This way, extending the test arsenal becomes much simpler: a test function may know nothing about the test framework; it ONLY needs to try hard to find a discrepancy in its own arguments (i.e. it can be a pure function). A builder module is available that imports the user's test subroutines into the main module.
3) Also supported: subcontracts to group complex checks, a functional interface (Test::More-compatible), and checking that a contract is fulfilled to exactly the given extent (useful for testing the test routines themselves).
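The refute principle fits in a few lines. This is only an illustration of the idea, not the interface of the released module, and check_positive_int() is a made-up example check:

```perl
use strict;
use warnings;

# refute: return a false value on success (1 bit is enough),
# or a diagnostic string carrying the details on failure.
sub refute {
    my ($reason, $message) = @_;
    return '' unless $reason;              # all good, nothing to report
    return "not ok - $message: $reason";   # failure, with details
}

# A check function knows nothing about any framework; it only
# tries hard to find a discrepancy in its own arguments.
sub check_positive_int {
    my $x = shift;
    return refute( !defined $x      && "got undef"
                || $x !~ /^\d+$/    && "got '$x'"
                || $x == 0          && "got zero"
                || 0, "positive integer" );
}
```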
UPD: Released as Assert::Refute.
Neaf [ni:f] stands for Not Even A Framework.
Much like Dancer, it splits an application into a set of handler subroutines associated with URI paths. Unlike Dancer, however, it doesn't export anything into the application namespace (except one tiny auxiliary sub). Instead, a know-it-all Request object is fed to the handler when serving a request, as in object-oriented CGI.pm or Kelp.
The response is expected in the form of an unblessed hash reference, which is in turn fed to the view object for rendering (Template Toolkit and JSON/JSONP are currently supported, plus Data::Dumper for debugging). The return value may also contain some dash-prefixed switches altering the behavior of Neaf itself - an awful-looking, yet visible and simple, way of doing it without resorting to a more complex structure.
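The dash-prefixed convention can be illustrated with a tiny helper that splits a handler's reply into framework controls and view data (split_reply and the keys used here are my own illustration, not part of Neaf):

```perl
use strict;
use warnings;

# Split a handler's reply hash: dash-prefixed keys steer the framework,
# everything else is plain data handed to the view for rendering.
sub split_reply {
    my ($reply) = @_;
    my (%control, %data);
    for my $key (keys %$reply) {
        if ($key =~ /^-/) { $control{$key} = $reply->{$key} }
        else              { $data{$key}    = $reply->{$key} }
    }
    return (\%control, \%data);
}
```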
Unlike anything I've seen so far, and much like Perl's own -T switch, it offers no (easy) way to get user input without validation, either through a regexp or through a form validator. (A regexp-based validator comes in stock; Validator::LIVR is also supported, and more are planned.)
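The validation-first access can be sketched as follows. My::Request and this param() signature are a hypothetical stand-in for illustration, not Neaf's actual interface:

```perl
use strict;
use warnings;

# A request object that refuses to hand out a parameter
# unless a validation regexp is supplied alongside its name.
package My::Request;

sub new {
    my ($class, %raw) = @_;
    return bless { raw => {%raw} }, $class;
}

sub param {
    my ($self, $name, $rex) = @_;
    die "param('$name'): a validation regexp is required"
        unless ref $rex eq 'Regexp';
    my $val = $self->{raw}{$name};
    # Only data that matches the whole pattern ever escapes.
    return undef unless defined $val and $val =~ /^(?:$rex)$/;
    return $val;
}

1;
```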
My not-so-impressive feature list so far:
I mostly wrote it for my own education, and to explore possible ways around the hurdles that plagued me throughout my last two jobs. Now I'd like to share it, but I'm still in doubt whether CPAN needs another web framework.
UPDATE: Renamed Text::Escape::Any => Text::Quote::Self - does the latter make more sense?
I would like to present a module that hides potentially dangerous strings behind a facade with overloaded stringification. The concrete stringification method (as-is, URI-escape, quotemeta, etc.) is chosen based on a package variable. Such a variable can be localized to a scope, and it is honoured by all of my stringifier objects at once.
This way, the part of the application that handles data does not need to know how we're going to present it. And the presentation part may handle all of its input values as plain strings without caring to quote them properly, as long as the preferred stringification method is set.
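A minimal sketch of the idea, using the proposed name; the mode names and the $QUOTE variable are my illustration, not the module's actual interface:

```perl
package Text::Quote::Self;   # sketch of the idea only
use strict;
use warnings;
use overload '""' => \&_render, fallback => 1;

# Package variable selecting the stringification mode;
# local()ize it to switch presentation for a whole scope at once.
our $QUOTE = 'raw';

my %render = (
    raw  => sub { $_[0] },
    meta => sub { quotemeta $_[0] },
    uri  => sub {
        my $s = $_[0];
        $s =~ s/([^A-Za-z0-9_.~-])/sprintf "%%%02X", ord $1/ge;
        return $s;
    },
);

sub new {
    my ($class, $str) = @_;
    return bless \$str, $class;
}

sub _render {
    my $self = shift;
    my $code = $render{$QUOTE}
        or die "Unknown quoting mode: $QUOTE";
    return $code->($$self);
}

1;
```

The data-handling code just passes these objects around; only the scope that sets (localizes) $QUOTE decides how they finally stringify.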
I still have some questions, mostly on naming:
- Is Text:: the right namespace?
- Is Text::Escape::Any a descriptive enough name, and what would be better if not?
- Is safe_text() a rare enough function name not to infringe on users' functions/methods?
I'm planning to release another module to CPAN: Guard::Stat.
It allows one to create guard objects and gather overall usage statistics about them: how many are still alive, how many are gone, etc.
It was initially created for tracking callback usage in an AnyEvent application, but it can really deal with any kind of activity: closures, plain objects, and so on.
The interface is simple: create guards, incorporate them into the tracked objects, get the stats (see the synopsis).
Other features include:
If needed, time statistics can also be gathered via an external class (like Statistics::Descriptive::Sparse). Such a class is only expected to provide an add_data() method.
If needed, an on_level callback can be provided to perform some action whenever the running() counter goes above or below a certain threshold (e.g. defer incoming requests if the load gets too high).
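The counting idea in miniature (an illustration, not Guard::Stat's real API; the class names, the level option, and the exact callback semantics here are mine):

```perl
use strict;
use warnings;

# A stats object hands out guards; each guard bumps the counters
# on creation and on destruction.
package My::GuardStat;

sub new {
    my ($class, %opt) = @_;
    return bless {
        total => 0, alive => 0, done => 0,
        level    => $opt{level},      # optional threshold
        on_level => $opt{on_level},   # optional callback
    }, $class;
}

sub guard {
    my $self = shift;
    $self->{total}++;
    $self->{alive}++;
    # Fire the optional callback when running() reaches the threshold.
    $self->{on_level}->( $self->{alive} )
        if $self->{on_level} and defined $self->{level}
        and $self->{alive} == $self->{level};
    return bless { stat => $self }, 'My::GuardStat::Guard';
}

sub running { $_[0]{alive} }
sub total   { $_[0]{total} }
sub done    { $_[0]{done} }

package My::GuardStat::Guard;

sub DESTROY {
    my $self = shift;
    $self->{stat}{alive}--;
    $self->{stat}{done}++;
}

1;
```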
The module is already used in-house and is more or less tested.
My main concern is the name - is Guard::Stat a good one? Is it clear what the module does? Doesn't it occupy a sweet spot where another future module would fit much better? (It looks like the stat() system call is not a resource one would build a guard for.)
If no objections follow, I'll probably release it around July 10.
This module provides a Statistics::Descriptive::Full-compatible interface; however, it doesn't keep all the data. Instead, the data is divided into logarithmic bins, and only the bin counts are stored. Thus the rough properties of a statistical distribution can be calculated without using lots of memory.
It was initially targeted at performance analysis. Knowing that 99% of requests finished in 0.1s +- 0.01s is more useful than just having an average request time of 0.1s +- 1s (standard deviation), which is what I observed more than once while trying to "analyze" our web applications' performance.
However, broader usage may exist; e.g. some long-running application may want to keep track of various useful numbers without leaking memory.
Ideally, this module could also become a short way of saying "I'm not sure why I need statistics, but it's nice to have, and simple." For those who know why they need statistics, there's R.
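The log-binning approach can be sketched as follows. My::LogBins is a made-up class for illustration, restricted to positive samples; the real module is far more complete:

```perl
use strict;
use warnings;
use POSIX ();

# Keep per-bin counts instead of raw data: memory stays bounded
# no matter how many samples arrive, at the cost of ~bin-width accuracy.
package My::LogBins;

sub new {
    my ($class, %opt) = @_;
    my $base = $opt{base} || 1.01;   # ~1% relative bin width
    return bless { base => $base, count => {}, n => 0 }, $class;
}

sub add_data {
    my ($self, @data) = @_;
    for my $x (@data) {
        die "positive values only in this sketch" if $x <= 0;
        my $bin = POSIX::floor( log($x) / log($self->{base}) );
        $self->{count}{$bin}++;
        $self->{n}++;
    }
}

sub count { $_[0]{n} }

# Approximate mean: each sample is represented by its bin's geometric
# center, so the result is accurate only to about the bin width.
sub mean {
    my $self = shift;
    return undef unless $self->{n};
    my $sum = 0;
    while ( my ($bin, $c) = each %{ $self->{count} } ) {
        $sum += $c * $self->{base} ** ($bin + 0.5);
    }
    return $sum / $self->{n};
}

1;
```

With a 1% bin width, samples spanning several orders of magnitude fit into a few hundred integer counters, which is the whole point.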