PrePAN provides a place to discuss your modules.


Requests for Reviews Feed

App::H2N A public copy of the h2n scripts used in the book "DNS and Bind"

The code is very helpful for anyone running the 'bind' DNS server. The '/etc/hosts' file format is used for the host database, and 'h2n' and other programs generate the required 'bind' files and report various information about the system. Given the many options 'h2n' accepts, the user will quickly find that one or more configuration files can simplify capturing oft-used options and documenting particular uses of the program. It is recommended that both the input database files and the configuration files be kept under version control.

The package organization is still being reworked to follow CPAN standards, and some tests will be added before the collection is uploaded.

The output from running 'h2n --help' is shown below.

Usage:  h2n [zone creation options] | -V [zone verification options]

The zone creation options are:
  -A Don't create name server data for aliases in the host table
  -a NET[:SUBNETMASK|/CIDRsize [mode=S]] [NET ...]
     Add hostname data on NET to -d DOMAIN but without PTR data
     mode=S  Allow /8-24 network to be a supernet to smaller-classed nets
     Set absolute directory path where boot/conf files will be written
     Use BOOTFILE instead of the default: ./named.boot (BIND 4)
     Create RRs using special host file comments as keys into COMMENT-FILE
     Prepend contents of PRE-CONFFILE to the BIND 8/9 conf file (+c option)
  -c REMOTE-DOMAIN [mode=[A][I][D[Q]][HS]] [REMOTE-DOMAIN [mode=...]
     Add CNAMEs which point to REMOTE-DOMAIN
     mode=A  Create additional CNAMEs for aliases in REMOTE-DOMAIN
         =I  REMOTE-DOMAIN is an intra-zone subdomain of -d DOMAIN
         =D  Defer CNAMEs; name conflicts prefer -d DOMAIN over REMOTE-DOMAIN
         =Q  Don't report name conflicts that prevent deferred CNAME creation
         =H  enable -hide-dangling-cnames REMOTE-DOMAIN option
         =S  enable -show-dangling-cnames REMOTE-DOMAIN option
  +c [CONFFILE] [mode=S|M]
     Use CONFFILE instead of the default: ./named.conf (BIND 8/9)
     mode=S  Create CONFFILE with zone entries in single-line format (default)
         =M  Create CONFFILE with zone entries in multi-line format
  -D [FILE]
     Create delegation information to link in with your parent zones
  -d DOMAIN [db=FILE1] [spcl=FILE2] [mode=D|Q]
     Create zone data file for DOMAIN
     db=FILE1    Override default filename of db.LABEL, e.g.,
     spcl=FILE2  Override default filename of spcl.LABEL for existing RRs
     mode=D      Set default domain of unqualified hostnames to DOMAIN
         =Q      Silently ignore hostnames that do not match DOMAIN
     Exclude hostfile data with names in EXCLUDED-DOMAIN
  -f FILE
     Read command line options from FILE
     Use HOSTFILE instead of /etc/hosts (read STDIN if HOSTFILE is `-')
  -h HOST
     Set HOST in the MNAME (master name server) field of the SOA record
  -I [ignore|warn|audit|audit-only|warn-strict|fail|strict] [rfc2782]
     Control level and type of various RFC conformance checks
     ignore       Disables checking of domain names and zone data consistency
     warn         Issue warning when hostnames contain illegal characters
     audit        Check zone data for integrity and RFC compliance + `warn'
     audit-only   Check zone data integrity without the `warn' check
     warn-strict  Warn about single-character hostnames + `warn' + `audit'
     fail         Reject hostnames with illegal characters + `audit'
     strict       Reject single-character hostnames + `fail' + `audit'
     rfc2782      Check SRV RRs for `_service._protocol' labels in owner names
  -i NUM
     Set the serial number of all created/updated zone files to NUM
  -L NUM
     Set file handle limit to NUM
     Add a logging specification to the BIND 8/9 config files
  -M [no-mx|smtp|no-smtp]
     Restrict the generation of MX records.  No argument means that MX
     records will not be generated under any circumstances.  Otherwise,
     set the default action which can be overridden on a host-by-host basis.
     no-mx    Do not generate any MX records
     smtp     Only generate the self-pointing MX record
     no-smtp  Only generate the global MX record(s) from -m option(s)
     Include MX record for each host not having [no mx]/[smtp] comment flags
  +m [D|C|P|CP]
     Control RR generation method for multi-homed hosts
     D   Use default behavior (A RRs for all names, CNAMEs for common aliases)
     C   Create A RRs for canonical name and 1st alias, CNAMEs for all others
     P   Create PTR RRs that point to A RR of 1st alias instead of canonical
     CP  Combine `C' and `P' flags
     Apply SUBNETMASK/CIDRsize as default value for subsequent -n/-a options
  -n NET[:SUBNETMASK|/CIDRsize [mode=S] [domain=DOMAIN] [ptr-owner=TEMPLATE]]
        [db=FILE1] [spcl=FILE2]
     Create zone data for each class-A/B/C subnet of NET for network sizes
     /8 to /24.  For /25-32 networks, create zone data to support RFC-2317
     delegations to DOMAIN with the owner names of the PTR records fitting
     the TEMPLATE pattern.
     mode=S      Allow /8-24 network to be a supernet to smaller-classed nets
     db=FILE1    Override default filename of db.NET, e.g., db.192.168.1
     spcl=FILE2  Override default filename of spcl.NET for existing RRs
     Add option specifications to BIND 4 boot files
     Add option specifications to BIND 8/9 conf files
     Set SOA time intervals
     Adds zone-specific options to BIND 8/9 master conf
     Adds zone-specific options to BIND 8/9 slave conf
  -P Preserve upper-case characters of hostnames and aliases in the host table
  -p REMOTE-DOMAIN [mode=A|P] [REMOTE-DOMAIN [mode=...]
     Create only PTR data for REMOTE-DOMAIN hosts
     mode=A  Required flag if REMOTE-DOMAIN's forward-mapping zone built w/ -A
         =P  Enables alternate method of PTR generation as described for +m P
  -q Work quietly
  -r Enable creation of RP (Responsible Person) records
     Adds NS record to zone(s) for the last preceding -d option or -n option(s)
  +S [enable|disable]
     Control class-A/B/C NETs to act as supernets for subsequent -n/-a options
     Adds NS record to zones for -d option and all -n options
  -T [mode=M] [RR='DNS RR' [RR='...']] [ALIAS='name [TTL]' [ALIAS='...']]
     Add additional top-of-zone-related records to DOMAIN of the -d option
     mode=M  Add the global MX record(s) specified in the -m option
     RR=     Add 'DNS RR' with owner field set to whitespace or to `@'
     ALIAS=  Add CNAME RR with owner field of 'name' & RDATA field set to `@'
  -t [O|P]
     Generate TXT records from host table comment fields excluding h2n flags
     O   Only generate a TXT record if an explicitly quoted string is present
     P   Prefer explicitly quoted text but otherwise act in the default manner
     Create $TTL directives & SOA Negative Cache TTL
     Set CONTACT as the mail addr. in the SOA RNAME (responsible person) field
  -v Display the version number of h2n
  -W PATH [mode=O]
     Set absolute directory path where `spcl'/zone files will be read/written
     mode=O  Set old (pre-v2.60) behavior where PATH only appears in boot/conf
             `directory' statements and `spcl' $INCLUDE directives.
  -w Generate WKS records for SMTP/TCP for every MX RRset
  -X Generate only the BIND conf/boot file(s) and exit
  -y [mode=[D|M]
     Set SOA serial numbers to use date/version format
     mode=D  Set day format of YYYYMMDDvv allowing 100 versions/day (default)
         =M  Set month format of YYYYMMvvvv allowing 10,000 versions/month
     Specify ADDRESS of primary from which to load unsaved zone data
     Specify ADDRESS of primary from which to load saved zone data
  -show-single-ns [-hide-single-ns]
     Report subdomain delegations that only have a single name server if
     auditing is in effect (default)
  -show-dangling-cnames [-hide-dangling-cnames] [REMOTE-DOMAIN [REMOTE-DOMAIN]]
     Report CNAMEs that point to non-existent external domain names or
     domain names with no RRs if auditing is in effect (default)
  -show-chained-cnames [-hide-chained-cnames]
     Display each out-of-zone chained CNAME if auditing (default is -hide)
  -query-external-domains [-no-query-external-domains]
     Make DNS queries for domain names in zones external to -d DOMAIN (default)
  -debug[:directory] [-no-debug]
     Prevent removal of temp files in /tmp or [directory] (default is -no)
  -glue-level [LEVEL]
     Specify/display the number (0-30) of chained inter-subzone delegations
     that are permitted before optional parent-zone glue RRs become mandatory
     if auditing is in effect.  Default LEVEL is 1.

The zone verification options are:
  -f FILE
     Read command line options from FILE
  -v Display the version number of h2n
  -I [audit|audit-only]
     Control level and type of various RFC conformance checks
     audit       Check zone data integrity & report names with illegal chars.
     audit-only  Check zone data integrity & ignore names with illegal chars.
     Verify the integrity of a domain obtained by an AXFR query
  -recurse[:depth] [-no-recurse]
     Recursively verify delegated subdomains to level [depth] (default is -no)
  -show-single-ns [-hide-single-ns]
     Report subdomain delegations that only have a single name server (default)
  -show-dangling-cnames [-hide-dangling-cnames] [REMOTE-DOMAIN [REMOTE-DOMAIN]]
     Report CNAMEs that point to non-existent out-of-zone domain names or
     domain names with no RRs (default)
  -show-chained-cnames [-hide-chained-cnames]
     Display each out-of-zone chained CNAME (default is -hide)
  -query-external-domains [-no-query-external-domains]
     Issue DNS queries for domains in zones external to -V DOMAIN (default)
  -check-del [-no-check-del]
     Check delegation of all discovered NS RRs (default)
  -debug[:directory] [-no-debug]
     Prevent removal of temp files in /tmp or [directory] (default is -no)
     Zone data temp file is re-verified instead of making a new AXFR query.
  -glue-level [LEVEL]
     Specify/display the number (0-30) of chained inter-subzone delegations
     that are permitted before optional parent-zone glue RRs become mandatory.
     Default LEVEL is 3.

This is ./h2n v2.61rc8

tbrowder@github 0 comments

WebService::MailChimp an interface to MailChimp's RESTful Web API v3 using Web::API

There already exist some modules for the MailChimp API, but they target older API versions:

  • WWW::Mailchimp (v1.3)
  • Mail::Chimp2 (v2)
  • Mail::Chimp (v1.2)

I'm struggling with what namespace to use. My default is WebService::MailChimp, which builds on the shoulders of others that have used the Web::API role.

I know WWW:: is not a good choice.

And the whole Mail:: namespace seems like it would be best for modules that handle e-mail, as opposed to interacting with a web API to send email newsletters. It just seems a little too "cute" to use Mail::Chimp...

Thoughts welcome on this and coding. Thanks!

jdigory@github 2 comments

Eber A program for freelance translators working on first-come first-served basis websites. Eber grabs translations, fast, and keeps an eye out for new ones.


Eber is served as one binary installed to $PATH: "eber".

Written in an object-oriented, modular way, which should make it extensible.


At this stage, the software makes quite a few assumptions about the system it is running on: notably, it includes a handful of system calls to the Mac OS X text-to-speech utility "say", which is used to attract the user's attention (implemented in the main::attention() subroutine). Suggestions on more platform-independent ways of performing this attention grabbing would be helpful.

Further limitations stem from the small number of websites the software has been tested with, and the rigidity of said websites' APIs, or lack thereof. Work has gone into providing as much abstraction as possible in the higher-level executable and library (src/bin/eber/ and the src/lib/Eber namespace) while pushing website-specific code down into the src/lib/[WEBSITE_NAME] folders and their packages. This is intended to facilitate the addition of other websites in the future.

I would appreciate input regarding any and all code, but specifically about the distribution and installation part, since I am inexperienced on that front, and I would like to eventually upload this to CPAN so as to share and maintain it easily with fellow translators.

Any help is welcome. Thanks for reading.

On the surface

This software watches specific freelance translation websites for new translations, finds the best one matching various criteria (minimum price, price per word, ...), and starts it as fast as possible.

It then watches for new translations again and compares them to the one in progress. If the current translation was started recently (i.e. you haven't been working on it long; 60 seconds by default) and a new translation's price is better than the current one's by a certain ratio (2 by default), it stops the current translation and starts the better one. It then resumes comparing, and so on.

This "surface" flow behavior is coded in the main "eber" bin file.


The software provides two sets of classes: the "Watcher" class and the "Translation" class. These are subclassed by the website packages.

The watchers have methods "login", "refresh", and an "error_handler". They have attributes "latency", "errors", "request", and a "translation", which is an object of the class "Translation" (in the corresponding website package).

The translations have methods "start", "stop", "is_better_than" for comparing. They have attributes "id", "price", "info" (human readable summary of what is in the object), "unwanted" (reason why the translation doesn't fit criteria), "duration" (allowed to complete it), "start_request", "stop_request" (HTTP::Request objects needed to start and stop it), and "url".
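As a concrete illustration of the interface described above, a website-specific Translation subclass might look like the sketch below. The package name My::Site::Translation and the constructor arguments are illustrative only; the real classes in Eber (under src/lib/[WEBSITE_NAME]) may differ, and only the ratio default of 2 comes from the description above.

```perl
use strict;
use warnings;

package My::Site::Translation;   # illustrative name, not Eber's real class

sub new {
    my ($class, %args) = @_;
    return bless {
        id    => $args{id},
        price => $args{price},
        info  => $args{info},
    }, $class;
}

# A candidate wins if its price beats the current job's by the ratio.
sub is_better_than {
    my ($self, $current, %opt) = @_;
    my $ratio = $opt{ratio} // 2;   # default ratio of 2
    return $self->{price} >= $ratio * $current->{price};
}

package main;

my $current   = My::Site::Translation->new(id => 1, price => 5);
my $candidate = My::Site::Translation->new(id => 2, price => 12);
print $candidate->is_better_than($current) ? "switch\n" : "keep\n";   # prints "switch"
```

The comparison lives on the Translation object itself, so each website package can override it if that site's pricing needs different handling.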

Controlling the program

Command-line options (see below) control key behavior. For further configuration, consider modifying the hard-coded options in the main file.


 # usage: $eber [--options]

 # option             default   explanation

 # --minprice or -m   0         price in dollars under which to discard translations
 # --gengo or -g      yes       run the gengo watcher (incompatible with below)
 # --oht or -o        no        run the oht watcher (incompatible with above)
 # --pro or -P        no        only start well priced per word jobs (> $0.08)
 # --verbose or -v    1         every instance adds one to verbose level, up to 3: 
 #                              logs more info, doesn't change screen output!
 # --quiet or -q      no        print (next to) nothing on screen, no sound
 # --dry or -d        no        perform a dry run (no start or stop requests)
 # --diagnose or -D   no        perform a diagnostic (print one scrape and exit)

 # --help or -?       .         print usage information
 # --man              .         manual page

Marcool04@github 0 comments

Moose::Tutorial::DesignPatterns This module would contain the base classes for a thorough review of design patterns using Moose.

Through my review of Moose, I have produced a thorough review of design patterns using Moose. All of the design patterns are exercised in a simple CLI demo, akin to those used in college courses. The design patterns appear, to me, to work effectively. The design patterns included are:

Creational patterns

Abstract factory pattern
Builder pattern
Factory method pattern
Lazy initialization pattern
Object pool
Prototype pattern
Singleton pattern

Structural patterns

Adapter pattern
Bridge pattern
Composite pattern
Decorator pattern
Facade pattern
Flyweight pattern
Proxy pattern

Behavioral patterns

Command pattern
Mediator pattern
State machine
Strategy pattern
Observer pattern
Visitor pattern
Template pattern
Memento pattern

Concurrency patterns

Active Object pattern
Double Checked Locking
Guarded Suspension
Thread Local Storage
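To give a flavor of the demos, the Singleton from the creational list can be sketched in a few lines of plain Perl. This is my own illustration with an invented class name (Counter); the tutorial itself builds these patterns with Moose.

```perl
use strict;
use warnings;

package Counter;   # illustrative class name

my $instance;      # the single shared instance lives in a lexical

sub instance {
    my $class = shift;
    return $instance //= bless { count => 0 }, $class;
}

sub bump { return ++$_[0]{count} }

package main;

# Every caller sees the same object, so state accumulates globally.
Counter->instance->bump for 1 .. 3;
print Counter->instance->{count}, "\n";   # prints 3
```

A Moose version would replace the hand-rolled constructor with attributes, but the pattern (one lazily created, globally shared instance) is the same.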

I hope to include this as a tutorial in the Tutorial namespace beneath Moose. When I looked at the classic Tetris example on MetaCPAN, it was in the Tutorial namespace. I agree this is a good place to put my design patterns.

At this time I have submitted the Moose design patterns to GitHub for careful review.


jmcveigh@github 2 comments

Weasel Web driver abstraction library


This library is inspired by PHP's Mink library, which wraps multiple web-driving frameworks extensibly (through driver plugins).


Additionally, Weasel features a BDD testing tool plugin for Pherkin (Test::BDD::Cucumber), similar to the way Mink plugs into Cucumber through its MinkExtension.

Improvements over PHP's Mink

Other features (not found in Mink):

  1. DOM Element search by mnemonic with extensible mnemonic set
  2. Customizable classes for returned WebElements based on pattern matching against the DOM Element


The reason to implement (1) is that some web development frameworks (notably Dojo) rewrite SELECT tags into an entirely different DOM tree. The purpose of extensible mnemonic sets is to support these DOM-rewriting libraries without needing to adjust the (BDD) test suite when switching frameworks, as well as to support sharing matching patterns between projects.

The second idea comes from the same background: Finding an option in a SELECT requires different code than finding an option in a Dojo-rewritten element. By creating this infrastructure, it becomes possible to provide a single interface to two totally different DOM trees with the same UI semantics. Registration of a new element class can be as simple as:

register_widget_handler('Weasel::Widgets::Dojo::Select', 'Dojo',
                        tag   => 'span',
                        class => 'dijitSelect');
print ref $session->find('*select', { labelled => 'Country' });
# now prints "Weasel::Widgets::Dojo::Select"


While writing LedgerSMB's BDD test suite, I'm finding a dire need for hooks into the various parts of the web driver testing code. In addition, heavily customizing the web driver (the alternative to this project) doesn't feel good, because it doesn't allow sharing and redistributing solutions easily.


When looking at the sources, please note that this project has just started. However, it will be used as the web testing framework for the LedgerSMB project within a week or two (I'm currently refactoring the test code to move over to this project).

ehuelsmann@github 0 comments

Die::Eventually Gathers multiple deaths - in scope - to be executed going out of scope

The idea is to avoid this pattern:

my @errors;
push @errors, "foo" if bar();
push @errors, "bar" if baz();
die join "\n", @errors if @errors;

It's not a dramatic reduction in code lines, but it is a nice reduction in programmer brain usage on mundane everyday boring things.
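For comparison, one shape such a module could take is a collector object that dies once with every recorded failure. This is my own sketch, not Die::Eventually's actual API (and the real module presumably triggers automatically at scope exit; exceptions in destructors are fragile, so this sketch flushes explicitly).

```perl
use strict;
use warnings;

package DieLater;   # hypothetical name, not the real module's API

sub new  { return bless { errors => [] }, shift }
sub fail { my ($self, $msg) = @_; push @{ $self->{errors} }, $msg }

# Die once, reporting every recorded failure, or do nothing if none.
sub done {
    my $self = shift;
    die join("\n", @{ $self->{errors} }) . "\n" if @{ $self->{errors} };
}

package main;

sub bar { 1 }   # stand-ins for the checks in the original pattern
sub baz { 1 }

my $guard = DieLater->new;
$guard->fail("foo") if bar();
$guard->fail("bar") if baz();
eval { $guard->done };
print $@;   # prints "foo" and "bar" on separate lines
```

The caller records failures as it goes and pays the "die" cost exactly once, which is the brain-usage reduction the post is after.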

torbjorn@github 1 comment

unifdef+ preprocessor simplification

I wrote a module in Perl, and have been given permission to upload it to CPAN from my company. The module is unifdef+, which basically simplifies preprocessor conditionals. It is modeled after unifdef (a standard Unix utility), only it does a few more things, including simplifying conditionals, simplifying compound conditionals, etc. It's also tolerant of spacing, comments, multi-line directives, etc.
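To illustrate what "simplifying preprocessor conditionals" means, here is a toy Perl sketch of the core idea. It is my own illustration, not unifdef+'s actual code, and it handles only flat (non-nested) #ifdef/#else/#endif over tokens whose defined state is known; unknown tokens pass through untouched.

```perl
use strict;
use warnings;

# Resolve #ifdef conditionals whose tokens have a known defined state,
# emitting only the surviving branch and dropping the directives.
sub simplify {
    my ($src, %defined) = @_;
    my (@out, @keep);   # @keep: are we currently emitting lines?
    for my $line (split /\n/, $src) {
        if ($line =~ /^\s*#\s*ifdef\s+(\w+)/ and exists $defined{$1}) {
            push @keep, $defined{$1};
        } elsif ($line =~ /^\s*#\s*else\b/ and @keep) {
            $keep[-1] = !$keep[-1];
        } elsif ($line =~ /^\s*#\s*endif\b/ and @keep) {
            pop @keep;
        } elsif (!@keep or $keep[-1]) {
            push @out, $line;
        }
    }
    return join "\n", @out;
}

my $c = "int x;\n#ifdef FOO\nint y;\n#else\nint z;\n#endif\nint w;";
print simplify($c, FOO => 1), "\n";   # keeps "int y;", drops "int z;" and the directives
```

The real tool goes much further (compound conditionals, #if expressions, comments, multi-line directives), but this is the basic transformation it performs.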

This is my first Perl module (and in fact one of my first Perl programs), so please bear with me. A coworker who knows more Perl went through the code and cleaned it up, so it should be good in that sense. Also, it's been tested and used for a while now, so it should be stable for C. It also has support for other languages. The Kconfig support works well, though it is limited when one Kconfig file includes another (it cannot transfer known tokens from one file to another). I did a hack for Makefiles using C-style comments, though I would like to add proper support eventually.

I've read through the standard documentation, but I have to say I'm still confused about a few things, like namespaces (the documentation doesn't clarify what it means by them), and the main document doesn't mention things like META.json, whereas other documents do reference it. I've written one, but I'm not sure if it's correct. I also have a .pm file and a .pl file that uses the library. I'm uncertain whether this is a single submission or multiple submissions.

I'm wondering if I could get some general guidance on how to name the module, what namespace to use, and other things as I come across them.



julvr@github 2 comments

XML::LibXMLSec wrapper for xmlsec1

XML::LibXMLSec - wrapper for some of xmlsec1

This distribution wraps a few functions from xmlsec1.

At the moment, only the code needed to verify an XML Signature against a PEM certificate is implemented.

I may write an Alien::LibXMLSec distribution in the future, but currently you need to have xmlsec1 already installed, including the headers (on most Linux distributions, this means that you need to install the -dev package).

The build instructions are the usual ones:

perl Makefile.PL
make test
make install

Things I'd like feedback on:

  • Is the XS sensible? This is my first attempt at XS, and I mostly copied it from XML::LibXML and XML::LibXSLT (the perl-* files are copied from the latter)
  • Is there anything obviously wrong with the code?

General style and cleanliness suggestions are welcome.

dakkar@github 2 comments

Archive::MultiSplit::DrCopy Create multi-volume disaster recovery copies of large filesystems


Archive::MultiSplit::DrCopy is a Perl module that provides a backup and recovery tool for large filesystems -- those that are much larger than the maximum size of existing hard disks.

It permits a complete copy to be taken of a dataset, splitting that copy up across different files and different target filesystems. This makes it useful, for example, for creating an offline copy of the dataset where the copy is spread across various removable disks.

It is targeted to situations where network-based disaster recovery options are not feasible.

The Perl script drcopy(1) acts as a wrapper to this module and is the normal way of invoking it.


   As documented in Archive::MultiSplit(3pm), the functions
   getopt_long_options() and parse_options() can be used to manage command
   line options to control this module.  In addition to those mentioned in
   Archive::MultiSplit(3pm) and Archive::MultiSplit::Interactive(3pm),
   this module understands the following options.

   --mode value
           Selects the operating mode.  value may be either backup or
           restore.  If not specified, the user will be prompted.

   --type value
           Selects the source type (during backup) or target type (during
           restore).  The value may be either dataset or directory. The
           latter should be used if you want to back up a directory
           hierarchy that is not a separate dataset or filesystem.
           If not specified, the user will be prompted.

   --encoding value
           During backup, this selects the data encoding mechanism.  A
           value of zfs_send means that the zfs(1) "send" command will be
           used to construct the data stream.  A value of tar means that
           the tar(1) command will be used.  If not specified, the user
           will be prompted.

           zfs_send is only permitted with --type=dataset.


drcopy(1), Archive::MultiSplit(3pm), Archive::MultiSplit::Interactive(3pm)

gdreade@github 2 comments

Archive::MultiSplit::Interactive split(1) and cat(1) operations across multiple volumes or filesystems, with interactive feedback


Archive::MultiSplit::Interactive is a subclass of Archive::MultiSplit, and ensures that when a new target directory (for split mode) or a new source directory (for join mode) is needed, the user is prompted to provide it.

(I'm not sure, here on PrePAN, whether this belongs in a separate module submission, but there is also a script, multisplit(1), that drives this module and has its own manpage. Details on it follow.)

SYNOPSIS FOR multisplit(1)

   multisplit -man

   multisplit [options]  recreated_data_stream

DESCRIPTION FOR multisplit(1)

multisplit and multicat provide a command-line interface to the Archive::MultiSplit::Interactive perl module.

multisplit will read stdin and split its contents across files and filesystems, prompting the user when a new target directory needs to be identified.

multicat will read archives created with multisplit, concatenate them, and write them to stdout, prompting the user when a new source directory needs to be identified.

OPTIONS FOR multisplit(1)

   -help   Print a brief help message and exit.

   -man    Print the manual page and exit.

   -a --suffix-length=N
           Use suffixes of length N (defaults to 6).

   -b --max-file-size=byte_count[K|k|M|m|G|g]
           Create split files byte_count in length. If k or K is appended
           to the number, the file is split into byte_count kilobyte
           pieces.  If m or M is appended to the number, the file is split
           into byte_count megabyte pieces.  If g or G is appended to the
           number, the file is split into byte_count gigabyte pieces.  The
           default is 2GB.

            Specify the base name of the split file. The default is 'x'.

           In split mode, ensure that at least the specified number of
           kilobytes are left available on the target filesystem.  By
           default, this is 1024kB (1MB).

           --min-free-kb, --max-data-percentage, and --max-data-kb are not
           mutually exclusive; writing will stop when any of the limits
           are exceeded.

           In split mode, do not exceed the given percentage full value on
           the target filesystem.  By default this is 100%.  The
           calculation does not consider any space reserved for the
           superuser. For example, if 5% is reserved for the superuser,
           multisplit will by default fill the volume to 100%, not 105%
           (assuming --min-free-kb was set to zero).

           --min-free-kb, --max-data-percentage, and --max-data-kb are not
           mutually exclusive; writing will stop when any of the limits
           are exceeded.

           In split mode, limit the amount of data written per volume to a
           maximum of value kB.  This option is used primarily for testing
           and is of limited production value.

           --min-free-kb, --max-data-percentage, and --max-data-kb are not
           mutually exclusive; writing will stop when any of the limits
           are exceeded.

           Specify the operating mode. Normally this is determined
           automatically from the name of the command.

           Normally, multisplit and multicat will change directory to the
           root directory ("/") prior to operations.  Specifying this flag
           will inhibit that behavior.
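The byte_count suffix rule from the -b option above can be sketched as follows. This is my own illustration of the documented K/M/G convention, not multisplit's actual code; the function name parse_size is invented.

```perl
use strict;
use warnings;

# Turn "2G", "512k", "100", etc. into a byte count, per the -b rule:
# K/k, M/m, and G/g mean kilobytes, megabytes, and gigabytes.
sub parse_size {
    my ($spec) = @_;
    my %mult = (k => 1024, m => 1024 ** 2, g => 1024 ** 3);
    my ($n, $suffix) = $spec =~ /^(\d+)([KkMmGg]?)$/
        or die "bad byte_count: $spec\n";
    return $n * ($suffix ? $mult{ lc $suffix } : 1);
}

print parse_size("2G"), "\n";   # prints 2147483648, the documented 2GB default
```

A bare number with no suffix is taken as a plain byte count, matching the option's description.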

gdreade@github 0 comments