PrePAN


Requests for Reviews Feed

Weasel Web driver abstraction library

Description

This library is inspired by PHP's Mink library, which wraps multiple web-driving frameworks extensibly (through driver plugins).

Features

Additionally, Weasel features a BDD testing tool plugin for Pherkin (Test::BDD::Cucumber), similar to the way Mink plugs into Cucumber through its MinkExtension.

Improvements over PHP's Mink

Other features (not found in Mink):

  1. DOM Element search by mnemonic with extensible mnemonic set
  2. Customizable classes for returned WebElements based on pattern matching against the DOM Element

Rationale

The reason for implementing (1) is that some web development frameworks (notably Dojo) rewrite SELECT tags into an entirely different DOM tree. The purpose of extensible mnemonic sets is to support these DOM-rewriting libraries without the need to adjust the (BDD) test suite when switching frameworks, as well as to support sharing of matching patterns between projects.

The second idea comes from the same background: Finding an option in a SELECT requires different code than finding an option in a Dojo-rewritten element. By creating this infrastructure, it becomes possible to provide a single interface to two totally different DOM trees with the same UI semantics. Registration of a new element class can be as simple as:

register_widget_handler('Weasel::Widgets::Dojo::Select', 'Dojo',
                        tag   => 'span',
                        class => 'dijitSelect');
print ref $session->find('*select', { labelled => 'Country' });
# now prints "Weasel::Widgets::Dojo::Select"

Use

While writing LedgerSMB's BDD test suite, I'm finding dire need for hooks into the various parts of the web driver testing code. In addition, heavily customizing the web driver - which is the alternative to this project - doesn't feel good, because it doesn't allow sharing and redistributing solutions easily.

Remarks

When looking at the sources, please note that this project has just started. However, it will be used as the web testing framework for the LedgerSMB project in a week or two (I'm currently refactoring the test code to move over to this project).

ehuelsmann@github 0 comments

Die::Eventually Gathers multiple deaths within a scope, to be raised together on scope exit

The idea is to avoid this pattern:

my @errors;
push @errors, "foo" if bar();
push @errors, "bar" if baz();
die join "\n", @errors if @errors;

It's not a dramatic reduction in code lines, but it is a nice reduction in programmer brain usage on mundane everyday boring things.
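One way this idea can be captured is with a scope guard object. The following is a minimal sketch, assuming a hypothetical guard-object API (Die::Eventually's real interface may differ; the class and method names here are made up for illustration):

```perl
use strict;
use warnings;

# Hypothetical guard class illustrating the idea; NOT Die::Eventually's
# actual API.
package DeathGuard;

sub new  { return bless { errors => [] }, shift }
sub note { push @{ $_[0]{errors} }, $_[1] }

sub DESTROY {
    my ($self) = @_;
    # Throw all collected errors at once when the guard leaves scope.
    die join("\n", @{ $self->{errors} }) . "\n" if @{ $self->{errors} };
}

package main;

sub bar { 1 }    # stand-ins for the checks in the original pattern
sub baz { 1 }

my $error = do {
    local $@;
    eval {
        my $guard = DeathGuard->new;
        $guard->note("foo") if bar();
        $guard->note("bar") if baz();
        1;    # no explicit die needed; $guard handles it on scope exit
    };
    $@;
};
print $error;
```

The caller's body shrinks to the two `note` calls; collecting and joining the errors moves entirely into the guard.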

torbjorn@github 1 comment

unifdef+ preprocessor simplification

I wrote a module in Perl and have been given permission by my company to upload it to CPAN. The module is unifdef+, which simplifies preprocessor conditionals. It is modeled after unifdef (a standard Unix utility), only it does a few more things, including simplifying conditionals, simplifying compound conditionals, etc. It is also tolerant of spacing, comments, multilines, etc.
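As a concrete illustration of what "simplifying preprocessor conditionals" means, here is a toy Perl sketch (not the module's actual code) that resolves a single #ifdef/#else/#endif given that FOO is known to be defined:

```perl
use strict;
use warnings;

my %defined = ( FOO => 1 );    # tokens known to be defined

my $src = <<'SRC';
#ifdef FOO
int a;
#else
int b;
#endif
SRC

my @keep;
my @stack;        # saved emit states for enclosing #ifdef blocks
my $emit = 1;

for my $line (split /\n/, $src) {
    if ($line =~ /^#ifdef\s+(\w+)/) {
        push @stack, $emit;
        $emit = $emit && ($defined{$1} ? 1 : 0);
    }
    elsif ($line =~ /^#else\b/) {
        $emit = $stack[-1] && !$emit;    # flip within the enclosing state
    }
    elsif ($line =~ /^#endif\b/) {
        $emit = pop @stack;
    }
    else {
        push @keep, $line if $emit;
    }
}

print "$_\n" for @keep;    # int a;
```

The real tool handles far more (compound conditionals, comments, multi-line directives); this sketch only shows the kind of output to expect: the directive lines vanish and only the live branch survives.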

This is my first Perl module (and in fact one of my first Perl programs), so please bear with me. I had a coworker who knows more about Perl go through the code and clean it up, so it should be good in that sense. Also, it has been tested and used for a while now, so it should be stable for C. It also has support for other languages. The Kconfig support works well, though it is limited when one Kconfig file includes another (it cannot transfer known tokens from one file to another). I did a hack for Makefiles, which uses C-style comments, though I would like to add proper support eventually.

I've read through the standard documentation, but I have to say I'm still confused about a few things, like namespaces (the documentation doesn't clarify what it means by them), and the main document doesn't mention things like META.json, whereas other documents seem to reference it. I've written one, but I'm not sure if it's correct. I also have a .pm file and a .pl file that uses the library. I'm uncertain whether this is a single submission or multiple submissions.

I'm wondering if I could get some general guidance on how to name the module, what namespace to use, and other things as I come across them.

Thanks,

John

julvr@github 2 comments

XML::LibXMLSec wrapper for xmlsec1

XML::LibXMLSec - wrapper for some of xmlsec1

This distribution wraps a few functions from xmlsec1.

At the moment, only the code needed to verify an XML Signature against a PEM certificate is implemented.

I may write an Alien::LibXMLSec distribution in the future, but currently you need to have xmlsec1 already installed, including the headers (on most Linux distributions, this means that you need to install the -dev package).

The build instructions are the usual ones:

perl Makefile.PL
make
make test
make install

Things I'd like feedback on:

  • Is the XS sensible? This is my first attempt at XS, and I mostly copied it from XML::LibXML and XML::LibXSLT (the perl-* files are copied from the latter)
  • Is there anything obviously wrong with the code?

General style and cleanliness suggestions are welcome.

dakkar@github 2 comments

Archive::MultiSplit::DrCopy Create multi-volume disaster recovery copies of large filesystems

DESCRIPTION

Archive::MultiSplit::DrCopy is a Perl module that provides a backup and recovery tool for large filesystems -- those that are much larger than the maximum size of existing hard disks.

It permits a complete copy to be taken of a dataset, splitting that copy up across different files and different target filesystems. This makes it useful, for example, for creating an offline copy of the dataset where the copy is spread across various removable disks.

It is targeted to situations where network-based disaster recovery options are not feasible.

The Perl script drcopy(1) acts as a wrapper to this module and is the normal way of invoking it.

OPTIONS

   As documented in Archive::MultiSplit(3pm), the functions
   getopt_long_options() and parse_options() can be used to manage command
   line options to control this module.  In addition to those mentioned in
   Archive::MultiSplit(3pm) and Archive::MultiSplit::Interactive(3pm),
   this module understands the following options.

   --mode value
           Selects the operating mode.  value may be either backup or
           restore.  If not specified, the user will be prompted.

   --type value
           Selects the source type (during backup) or target type (during
           restore).  The value may be either dataset or directory. The
           latter should be used if you want to back up a directory
           hierarchy that is not a separate dataset or filesystem.
           If not specified, the user will be prompted.

   --encoding value
           During backup, this selects the data encoding mechanism.  A
           value of zfs_send means that the zfs(1) "send" command will be
           used to construct the data stream.  A value of tar means that
           the tar(1) command will be used. If not specified, the user will be
           prompted.

           zfs_send is only permitted with --type=dataset.

SEE ALSO

drcopy(1), Archive::MultiSplit(3pm), Archive::MultiSplit::Interactive(3pm)

gdreade@github 2 comments

Archive::MultiSplit::Interactive split(1) and cat(1) operations across multiple volumes or filesystems, with interactive feedback

DESCRIPTION

Archive::MultiSplit::Interactive is a subclass of Archive::MultiSplit, and ensures that when a new target directory (for split mode) or a new source directory (for join mode) is needed, the user is prompted to provide it.

(I'm not sure whether, here on PrePAN, this belongs in a separate module submission, but there is also a script that drives this module, with its own manpage: multisplit(1).) Details on it follow.

SYNOPSIS FOR multisplit(1)

   multisplit -man

   multisplit [options]  recreated_data_stream

DESCRIPTION FOR multisplit(1)

multisplit and multicat provide a command-line interface to the Archive::MultiSplit::Interactive perl module.

multisplit will read stdin and split its contents across files and filesystems, prompting the user when a new target directory needs to be identified.

multicat will read archives created with multisplit, concatenate them, and write them to stdout, prompting the user when a new source directory needs to be identified.

OPTIONS FOR multisplit(1)

   -help   Print a brief help message and exit.

   -man    Print the manual page and exit.

   -a --suffix-length=N
           Use suffixes of length N (defaults to 6).

   -b --max-file-size=byte_count[K|k|M|m|G|g]
           Create split files byte_count in length. If k or K is appended
           to the number, the file is split into byte_count kilobyte
           pieces.  If m or M is appended to the number, the file is split
           into byte_count megabyte pieces.  If g or G is appended to the
           number, the file is split into byte_count gigabyte pieces.  The
           default is 2GB.

   --base-name=name
           Specify the base name of the split file. The default is ’x’.

   --min-free-kb=value
           In split mode, ensure that at least the specified number of
           kilobytes are left available on the target filesystem.  By
           default, this is 1024kB (1MB).

           --min-free-kb, --max-data-percentage, and --max-data-kb are not
           mutually exclusive; writing will stop when any of the limits
           are exceeded.

   --max-data-percentage=value
           In split mode, do not exceed the given percentage full value on
           the target filesystem.  By default this is 100%.  The
           calculation does not consider any space reserved for the
           superuser. For example, if 5% is reserved for the superuser,
           multisplit will by default fill the volume to 100%, not 105%
           (assuming --min-free-kb was set to zero).

           --min-free-kb, --max-data-percentage, and --max-data-kb are not
           mutually exclusive; writing will stop when any of the limits
           are exceeded.

   --max-data-kb=value
           In split mode, limit the amount of data written per volume to a
           maximum of value kB.  This option is used primarily for testing
           and is of limited production value.

           --min-free-kb, --max-data-percentage, and --max-data-kb are not
           mutually exclusive; writing will stop when any of the limits
           are exceeded.

   --mode={split|join}
           Specify the operating mode. Normally this is determined
           automatically from the name of the command.

   --no-chdir
           Normally, multisplit and multicat will change directory to the
           root directory ("/") prior to operations.  Specifying this flag
           will inhibit that behavior.

gdreade@github 0 comments

Archive::MultiSplit split(1) and cat(1) operations across multiple volumes or filesystems

DESCRIPTION

Archive::MultiSplit traces its lineage to the UNIX split(1) and cat(1) programs. It was designed to fulfill a similar need (that is, splitting a data stream into multiple files and later concatenating them back together), but with the ability to further split the files across multiple filesystems, not all of which need to be mounted at the time of invocation. This makes it possible, for example, to split a data stream across multiple hard disks where only one target hard disk is mounted at a time.

In this documentation, the terms Volume and Filesystem are used interchangeably.

Note that MultiSplit is a base class and must be subclassed before it is used. See the section below on OVERRIDABLE METHODS and the module Archive::MultiSplit::Interactive.

There are two key methods in this module:

  • split_input_stream

    This method takes a filehandle reference and reads from it, splitting the resulting data into separate files on a target volume. When the target volume is full, it will invoke get_next_target_directory to obtain the next target volume and repeat the process until EOF is read on the input filehandle.

  • join_output_stream

    This method takes a filehandle reference. It will invoke get_next_target_directory to get the location of the data files to read. It will then read each file in turn (based on baseFileName) and write the file contents to the provided filehandle. When the next file cannot be found on the current volume, it will invoke get_next_target_directory again and repeat the process. It will continue doing this until get_next_target_directory returns undef.

The default implementation of get_next_target_directory aborts the program if it is called. This is the only method which MUST be overridden in a subclass.
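A minimal subclass might look like the following sketch. Only get_next_target_directory is documented above as mandatory, so the stand-in base class and the constructor details here are assumptions for illustration, not the module's real internals:

```perl
use strict;
use warnings;

# Stand-in base class so this sketch runs on its own; the real
# Archive::MultiSplit provides split_input_stream/join_output_stream too.
package Archive::MultiSplit;
sub new { return bless {}, shift }
sub get_next_target_directory { die "must be overridden in a subclass\n" }

# A subclass that serves targets from a fixed list instead of prompting.
package Archive::MultiSplit::FixedTargets;
our @ISA = ('Archive::MultiSplit');

sub new {
    my ($class, @targets) = @_;
    my $self = $class->SUPER::new;
    $self->{targets} = [@targets];
    return $self;
}

# Returning undef once the list is exhausted is what ends join mode.
sub get_next_target_directory {
    my ($self) = @_;
    return shift @{ $self->{targets} };
}

package main;

my $ms = Archive::MultiSplit::FixedTargets->new('/mnt/disk1', '/mnt/disk2');
print $ms->get_next_target_directory, "\n";    # /mnt/disk1
```

Archive::MultiSplit::Interactive is presumably the same shape, with get_next_target_directory prompting the user instead of shifting a list.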

SEE ALSO

   split(1), cat(1), multisplit(1), Archive::MultiSplit::Interactive(3pm)

gdreade@github 0 comments

Test::Mountebank Perl client library for mountebank (see http://www.mbtest.org/)

The example in the synopsis builds an object structure that generates JSON code like the following, which can be sent to the running mountebank instance in a POST request.

{
    "port": 4546,
    "protocol": "http",
    "stubs": [
        {
            "predicates": [
                {
                    "equals": {
                        "method": "GET",
                        "path": "/foobar.json"
                    }
                }
            ],
            "responses": [
                {
                    "is": {
                        "body": {
                            "foo": "bar"
                        },
                        "headers": {
                            "Content-Type": "application/json"
                        },
                        "statusCode": 200
                    }
                }
            ]
        },
        {
            "predicates": [
                {
                    "equals": {
                        "method": "GET",
                        "path": "/qux/999/json"
                    }
                }
            ],
            "responses": [
                {
                    "is": {
                        "body": "{ \"error\": \"No such qux: 999\" }",
                        "headers": {
                            "Content-Type": "application/json"
                        },
                        "statusCode": 404
                    }
                }
            ]
        },
        {
            "predicates": [
                {
                    "equals": {
                        "method": "GET",
                        "path": "/foobar.html"
                    }
                }
            ],
            "responses": [
                {
                    "is": {
                        "body": "\n  \n    foobar\n  \n  \n    foobar\n  \n\n\n",
                        "headers": {
                            "Content-Type": "text/html"
                        },
                        "statusCode": 200
                    }
                }
            ]
        }
    ]
}

Compare the mountebank documentation at http://www.mbtest.org/docs/api/stubs and http://www.mbtest.org/docs/api/predicates. Currently, Test::Mountebank implements only the features of mountebank stubs that are most useful for simulating a REST API: there is only one type of predicate (equals) and only one type of response (is).
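Test::Mountebank's own builder API is not shown here, but as a hedged sketch, an imposter structure like the JSON above can be assembled and POSTed to mountebank's admin endpoint with core modules alone. The admin URL (mountebank defaults to port 2525, path /imposters) and the MB_ADMIN_URL environment variable are assumptions of this sketch, not part of the module:

```perl
use strict;
use warnings;
use JSON::PP qw(encode_json decode_json);
use HTTP::Tiny;

# One stub of the imposter shown above: GET /foobar.json -> 200 + JSON body.
my $imposter = {
    port     => 4546,
    protocol => 'http',
    stubs    => [
        {
            predicates => [
                { equals => { method => 'GET', path => '/foobar.json' } },
            ],
            responses => [
                { is => {
                    statusCode => 200,
                    headers    => { 'Content-Type' => 'application/json' },
                    body       => { foo => 'bar' },
                } },
            ],
        },
    ],
};

my $json = encode_json($imposter);

# Only POST when an admin URL is provided, e.g. http://localhost:2525
if ( my $url = $ENV{MB_ADMIN_URL} ) {
    my $res = HTTP::Tiny->new->post(
        "$url/imposters",
        { headers => { 'Content-Type' => 'application/json' },
          content => $json },
    );
    print "mountebank responded with status $res->{status}\n";
}
```

A client library like Test::Mountebank then amounts to building this structure from nicer Perl objects and handling the HTTP round trip for you.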

dagfinnr@github 0 comments

Pgtools Command-line tools for PostgreSQL operation

NAME

Pgtools - Yet another set of command-line tools for PostgreSQL operation.

SYNOPSIS

pg_kill

$ pg_kill -kill -print -mq "like\s'\%.*\%'" "192.168.32.12,5432,postgres,,dvdrental"
-------------------------------
Killed-pid: 11590
At        : 2016/03/21 01:32:29
Query     : SELECT * FROM actor WHERE last_name like '%a%';
Killed matched queries!

pg_config_diff

$ pg_config_diff  "192.168.33.21,5432,postgres,," "192.168.33.22,,,," "192.168.33.23,5432,postgres,,dvdrental"
           192.168.33.21           192.168.33.22           192.168.33.23
--------------------------------------------------------------------------------------------
max_connections          50                      100                     100
shared_buffers           32768                   16384                   65536
tcp_keepalives_idle      8000                    7200                    10000
tcp_keepalives_interval  75                      75                      10
wal_buffers              1024                    512                     2048

pg_fingerprint

$ pg_fingerprint queries_file
SELECT * FROM user WHERE id = ?;
SELECT * FROM user2 WHERE id = ? LIMIT ?;
SELECT * FROM user2 WHERE point = ?;
SELECT * FROM user2 WHERE expression IS ?;

DESCRIPTION

Pgtools is composed of three commands: pg_kill, pg_config_diff, and pg_fingerprint.

  • pg_kill shows currently executing queries matched by a regular expression and other options, and also kills those matched queries when the -kill option is given.
  • pg_config_diff requires two or more arguments, each a string specifying a PostgreSQL database to compare.
  • pg_fingerprint converts literal values in queries into placeholders.
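The fingerprinting transformation shown in the synopsis can be sketched with a pair of substitutions. This is a toy illustration of the idea, not Pgtools' implementation:

```perl
use strict;
use warnings;

# Replace string and numeric literals in a query with '?' placeholders.
sub fingerprint {
    my ($sql) = @_;
    $sql =~ s/'(?:[^'\\]|\\.)*'/?/g;     # quoted string literals
    $sql =~ s/\b\d+(?:\.\d+)?\b/?/g;     # integer and decimal literals
    return $sql;
}

print fingerprint("SELECT * FROM user WHERE id = 42;"), "\n";
# SELECT * FROM user WHERE id = ?;
```

Note that \b keeps identifiers such as user2 intact while still catching bare numbers, which matches the sample output above.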

LICENSE

Copyright (C) Otsuka Tomoaki.

This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

AUTHOR

Otsuka Tomoaki <otsuka.t.2013@gmail.com>

tom--bo@github 1 comment

Fritz Perl module for AVM Fritz!Box interaction via TR-064

Fritz is a set of modules to communicate with an AVM Fritz!Box (and possibly other routers as well) via the TR-064 protocol.

I wanted to initiate calls from the command line, but I only found GUI tools for that, or libraries in languages other than Perl, so I built this library.

Luckily, the TR-064 protocol announces all available services via XML. So this module makes some HTTP or HTTPS requests to find the router, queries its services, and then calls them via SOAP. Parameter names and counts are verified against the service specification, but Fritz itself knows nothing about the available services or what they do.

mmitch@github 3 comments