PrePAN


Requests for Reviews Feed

OSA Modules supporting the Online Social Advocacy standards and APIs

Being social on the Internet has become a centralized activity that defies the natural manner of physical, real-world communication. The current online communication paradigm creates notable technical incongruence, fosters dramatic security incidents and, most importantly, strips both sender and recipient of control over their data.

At TekAdvocates we are designing an online communication strategy based on what we call "Social Normalcy." This strategy is based on standards and APIs we intend to make open once reasonably baselined, not on a single vertical application interface that locks anyone using it into a particular company's proprietary product set.

We have developed the standards and APIs far enough to have created a fully functioning prototype that demonstrates their viability. All our current code is written in Perl, though any language suffices provided the resulting program adheres to the standards. It follows naturally, then, that we would convert our efforts into a suite of modules supporting the referenced standards and APIs, to help interested developers quickly create compliant applications.

Our efforts are called the "Online Social Advocate" standards and APIs because the end product is a localized service that centralizes a household's, business's, or other organization's data transfer needs within the confines of their own space. The centralized service is known as that organization's data transfer advocate, in that it manages routing, control, and access of passed information within and outside the organization or household.

By definition of our effort, this is an entirely new code set and category of module on CPAN. As near as we can tell from reading the informational pages on submissions and searching CPAN, a top-level OSA namespace is warranted. While the "OSA" name may mean nothing to most, it would be instantly recognizable to anyone looking to develop code compliant with these standards. The intention is for the OSA modules to evolve with the standards and as broader ways to use those standards are realized.

Our intention is to initially create three major namespaces:

  OSA       - base modules useful with any application using the OSA standards
  OSA::App  - modules useful when writing applications to initiate and receive data transfers
  OSA::OCE  - modules useful when authoring an "Online Communication Engine", the actual "advocate" server that manages and routes data

We are currently planning to submit everything under the "OSA" umbrella, and we are of course wide open to any advice offered in this regard. Another consideration was to make individual submissions of "OSA", "OSA::App", and "OSA::OCE", but since everything falls under the new "OSA" space, that seemed the appropriate package level. Downloading "OSA" would be pointless without either OSA::App or OSA::OCE to create an application in which to use the base modules, and one would never use OSA::App or OSA::OCE without the base OSA modules.

Our thought is that it could eventually make sense to make OSA::App available separately from OSA::OCE, since OSA::App will likely be used significantly more often; however, OSA::OCE will initially be small enough that packaging everything together should not be an issue.

We have never submitted modules to CPAN before, so any feedback oriented toward helping us get this right would be appreciated.

Vranicoff@github 0 comments

PDLx::Algorithm::Center Various ways of centering a dataset

This module collects various algorithms for determining the center of a dataset into one place. It accepts data stored as PDL variables (piddles).

Currently it contains a single function, sigma_clip, which provides an iterative algorithm that successively removes outliers by clipping those whose distances from the current center are greater than a given number of standard deviations.

sigma_clip finds the center of a data set by:

  1. ignoring the data whose distance to the current center is greater than a specified number of standard deviations
  2. calculating a new center by performing a (weighted) centroid of the remaining data
  3. calculating the standard deviation of the distance from the data to the center
  4. repeating from step 1 until either a convergence tolerance has been met or the iteration limit has been exceeded

The initial center may be explicitly specified, or may be calculated by performing a (weighted) centroid of the data.

The initial standard deviation is calculated using the initial center and either the entire dataset or a clipped region about the initial center.
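
To make the iteration concrete, here is a minimal plain-Perl sketch of the loop for one-dimensional, unweighted data; it illustrates the algorithm only and is not the module's PDL-based interface (the names are hypothetical):

  use strict;
  use warnings;
  use List::Util qw(sum);

  # Plain-Perl illustration of the sigma-clipping loop for 1-D, unweighted data.
  sub sigma_clip_1d {
      my ($nsigma, $tolerance, $max_iter, @data) = @_;
      return unless @data;

      my $center = sum(@data) / @data;                                     # initial centroid
      my $sigma  = sqrt( sum(map { ($_ - $center)**2 } @data) / @data );   # initial std. dev.

      for ( 1 .. $max_iter ) {
          my @kept = grep { abs($_ - $center) <= $nsigma * $sigma } @data; # step 1: clip
          last unless @kept;

          my $new_center = sum(@kept) / @kept;                             # step 2: re-center
          my $new_sigma  = sqrt( sum(map { ($_ - $new_center)**2 } @kept) / @kept );  # step 3

          my $shift = abs( $new_center - $center );
          ( $center, $sigma ) = ( $new_center, $new_sigma );
          last if $shift <= $tolerance;                                    # step 4: converged?
      }

      return ( $center, $sigma );
  }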

sigma_clip can center sparse (e.g., input is a list of coordinates) or dense datasets (input is a hyper-rectangle) with or without weights. It accepts a mask which directs it to use only certain elements in the dataset.

The coordinates may be transformed using [PDL::Transform](https://metacpan.org/pod/PDL::Transform). This is mostly useful for dense datasets, where coordinates are generated from the indices of the passed hyper-rectangle. This functionality is not currently documented, as tests for it have not yet been written.

More information is available at the github repo page, https://github.com/djerius/PDLx-Algorithm-Center

djerius@github 0 comments

Geo::OLC API for Google's Open Location Codes

Open Location Codes are Google's open-sourced geohashing algorithm. They provide a nice set of APIs at https://github.com/google/open-location-code, but not for Perl.
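
To give a sense of the intended interface, a usage sketch might look like the following; the function names mirror Google's reference implementations and may not match what the Perl module ends up exporting:

  use Geo::OLC;

  # hypothetical calls mirroring the reference API
  my $code = Geo::OLC::encode( 47.365590, 8.524997 );   # lat, lon -> plus code string
  my $area = Geo::OLC::decode( $code );                 # plus code -> code-area information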

Despite having worked with Perl since the Eighties, I've never contributed to CPAN, so I'm open to any recommendations about naming, packaging, code style, etc.

There is a module on Github that implements the same API (discovered after I wrote mine...), but it was apparently never submitted to CPAN: https://github.com/nkwhr/Geo-OpenLocationCode

jgreely@github 5 comments

Test::DocClaims Help assure documentation claims are tested

A module should have documentation that defines its interface. All claims in that documentation should have corresponding tests to verify that they are true. Test::DocClaims is designed to help assure that those tests are written and maintained.

It would be great if software could read the documentation, enumerate all of the claims made and then generate the tests to assure that those claims are properly tested. However, that level of artificial intelligence does not yet exist. So, humans must be trusted to enumerate the claims and write the tests.

How can Test::DocClaims help? As the code and its documentation evolve, the test suite can fall out of sync, no longer testing the new or modified claims. This is where Test::DocClaims can assist. First, a copy of the POD documentation must be placed in the test suite. Then, after each claim, a test of that claim should be inserted. Test::DocClaims compares the documentation in the code with the documentation in the test suite and reports discrepancies. This will act as a trigger to remind the human to update the test suite. It is up to the human to actually edit the tests, not just sync up the documentation.

The comparison is done line by line. Trailing white space is ignored. Any white space sequence matches any other white space sequence. Blank lines as well as "=cut" and "=pod" lines are ignored. This allows tests to be inserted even in the middle of a paragraph by placing a "=cut" line before and a "=pod" line after the test.

Additionally, a special marker, of the form "=for DC_TODO", can be placed in the test suite in lieu of writing a test. This serves as a reminder to write the test later, but allows the documentation to be in sync so the Test::DocClaims test will pass with a todo warning. Any text on the line after DC_TODO is ignored and can be used as a comment.
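
For instance, a claim in the test suite's copy of the POD could carry a placeholder like this (the claim text is hypothetical):

  The reset method returns the object to its initial state.

  =for DC_TODO no test yet for reset after partial updates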

Especially in the SYNOPSIS section, it is common practice to include example code in the documentation. In the test suite, if this code is surrounded by "=begin DC_CODE" and "=end DC_CODE", it will be compared as if it were part of the POD, but can run as part of the test. For example, if this is in the documentation

  Here is an example:

    $obj->process("this is some text");

this could be in the test

  Here is an example:

  =begin DC_CODE

  =cut

  $obj->process("this is some text");

  =end DC_CODE

Example code that uses print or say and has a comment at the end will also match a call to is() in the test. For example, this in the documentation POD

  The add function will add two numbers:

    say add(1,2);            # 3
    say add(50,100);         # 150

will match this in the test.

  The add function will add two numbers:

  =begin DC_CODE

  =cut

  is(add(1,2), 3);
  is(add(50,100), 150);

  =end DC_CODE

When comparing code inside DC_CODE markers, all leading white space is ignored.

When the documentation file type does not support POD (such as Markdown files, *.md), then the entire file is assumed to be documentation and must match the POD in the test file. For these files, leading white space is ignored. This allows a leading space to be added in the POD if necessary.

ScottLee1260@github 1 comment

Bitcoin::Client Implements bitcoin-cli methods

A module for bootstrapping Bitcoin Core RPC client calls (bitcoin-cli).

The idea is that someone can install the module from CPAN and immediately start coding against a bitcoind instance in an OO way, with syntax similar to bitcoin-cli, without compiling or installing many Perl dependencies (just Moo and JSON::RPC::Client; I'm thinking about taking out Moo).

Right now the module is just named BTC. But I think Bitcoin::Client or Bitcoin::Cli would be more appropriate.
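
As a rough illustration of that goal, usage might look something like the following; the constructor arguments are guesses rather than the module's confirmed interface:

  use BTC;   # proposed rename: Bitcoin::Client or Bitcoin::Cli

  # hypothetical constructor; argument names may differ in the real module
  my $btc = BTC->new(
      user     => 'rpcuser',
      password => 'rpcpassword',
      host     => '127.0.0.1',
      port     => 8332,
  );

  # object methods mirroring bitcoin-cli commands
  my $info    = $btc->getblockchaininfo;
  my $balance = $btc->getbalance;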

There are a couple of other bitcoin modules that are similar but the syntax is not as simple and there are many more dependencies.

whinds84@github 2 comments

File::JSON::Slurper A mashup of File::Slurper and JSON::MaybeXS

On the Nth time of using File::Slurper and JSON::MaybeXS, I decided it would be handy to have a module which just wrapped these two things up.

It would have the reverse operation as well: write_json.

I had a quick skim of CPAN, and couldn't find a clean simple interface like this. There's YANICK's File-Serialize, but it's not quite what I'm after.

I've tried various names for the functions, such as decode_json_from_file, and am still not sure. I picked names that mirror File::Slurper. So suggestions for naming are welcome!
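
For context, the core of such a wrapper could be roughly the following, assuming the final names end up as read_json/write_json (they may well not):

  use File::Slurper qw(read_binary write_binary);
  use JSON::MaybeXS qw(encode_json decode_json);

  # hypothetical wrappers; the released names and options may differ
  sub read_json {
      my ($filename) = @_;
      return decode_json( read_binary($filename) );   # JSON text is UTF-8 encoded bytes
  }

  sub write_json {
      my ($filename, $data) = @_;
      write_binary( $filename, encode_json($data) );
      return;
  }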

neilbowers@github 1 comment

Syntax::Keyword::Try a try/catch/finally syntax for perl

This module provides a syntax plugin that implements exception-handling semantics in a form familiar to users of other languages, being built on a block labeled with the try keyword, followed by at least one of a catch or finally block.

As well as providing a handy syntax for this useful behaviour, this module also serves to contain a number of code examples showing how to implement parser plugins and manipulate optrees to provide new syntax and behaviours for perl code.

KEYWORDS

try

try {
   STATEMENTS...
}
...

A try statement provides the main body of code that will be invoked, and must be followed by either a catch statement, a finally statement, or both.

Execution of the try statement itself begins from the block given to the statement and continues until either it throws an exception, or completes successfully by reaching the end of the block. What will happen next depends on the presence of a catch or finally statement immediately following it.

The body of a try {} block may contain a return expression. If executed, such an expression will cause the entire containing function to return with the value provided. This is different from a plain eval {} block, in which circumstance only the eval itself would return, not the entire function.
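
For example, in the sketch below (load_config and default_config are illustrative subs, not part of this module), each return exits load_or_default itself rather than just the enclosing block:

use Syntax::Keyword::Try;

sub load_or_default {
   try {
      return load_config();      # returns from load_or_default, not just the try block
   }
   catch {
      return default_config();   # likewise
   }
}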

The body of a try {} block may contain loop control expressions (redo, next, last) which will have their usual effect on any loops that the try {} block is contained by. As of the current implementation however, these will result in a warning

Exiting eval via redo at FILE line LINE.

The use of no warnings 'exiting' can avoid this.
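
For instance (wanted and process are illustrative), the warning can be silenced lexically around the loop body:

foreach my $item (@items) {
   no warnings 'exiting';
   try {
      next unless wanted($item);   # loop control from inside try {}
      process($item);
   }
   catch {
      warn "failed on $item: $@";
   }
}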

The parsing rules for the set of statements (the try block and its associated catch and finally) are such that they are parsed as a self-contained statement. Because of this, there is no need to end with a terminating semicolon.

catch

...
catch {
   STATEMENTS...
}

A catch statement provides a block of code to the preceding try statement that will be invoked in the case that the main block of code throws an exception. The catch block can inspect the raised exception by looking in $@ in the usual way.

The presence of this catch statement causes any exception thrown by the preceding try block to be non-fatal to the surrounding code. If the catch block wishes to handle some exceptions but not others, it can re-raise the exception (or a new one) by calling die in the usual manner.

As with try, the body of a catch {} block may also contain a return expression, which as before, has its usual meaning, causing the entire containing function to return with the given value. The body may also contain loop control expressions (redo, next or last) which also have their usual effect.

If a catch statement is not given, then any exceptions raised by the try block are raised to the caller in the usual way.

finally

...
finally {
   STATEMENTS...
}

A finally statement provides a block of code to the preceding try statement (or try/catch pair) which is executed afterwards, whether execution completed normally or an exception was thrown. This code block may be used to provide whatever clean-up operations might be required by the preceding code.

Because it is executed during a stack cleanup operation, a finally {} block may not cause the containing function to return, or alter its return value. It also cannot see the containing function's @_ arguments array (though as it is block-scoped within the function, it will continue to share any normal lexical variables declared up until that point). It is protected from disturbing the value of $@. If the finally {} block code throws an exception, this will be printed as a warning and discarded, leaving $@ containing the original exception, if one existed.
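
For example (acquire_lock and release_lock are illustrative), the cleanup runs whether or not the body throws:

my $lock = acquire_lock();
try {
   do_work($lock);
}
catch {
   warn "work failed: $@";
}
finally {
   release_lock($lock);   # runs on normal completion and after an exception alike
}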

TODO

  • Value semantics. It would be nice if a do {}-wrapped try set could yield a value, in the way other similar constructs can. For example

    my $x = do {
       try { attempt(); "success" }
       catch { "failure" }
    };
    

    A workaround for this current lack is to wrap the try{} catch{} pair in an anonymous function which is then immediately executed:

    my $x = sub {
       try { attempt(); return "success" }
       catch { return "failure" }
    }->();
    
  • Suppress the exiting warning when using loop control expressions in the try {} block. This is slightly nontrivial because an in-place replacement of an OP_LAST with an OP_LINESEQ containing the new hints COP isn't possible due to differing op types. It will require more careful re-splicing into parent trees.

SEE ALSO

(At some point I'll write here a list of other CPAN modules providing similar ideas, and compare the features. It's a little tricky to do that before the feature set is properly defined, specifically with respect to the items still TODO.)

AUTHOR

Paul Evans

leonerd@github 4 comments

kenvperl script written in perl and XS to dump the kernel environment of FreeBSD

This script retrieves kenv values from FreeBSD using XS. By default the script looks for all elements containing "system"; change this to the string you are looking for. The XS part, written in C, contains code from the original kenv written by Peter Wemm (peter@freebsd.org). The current script is the work of Laszlo Danielisz, with a lot of help from Patrick Mullen.

danielisz@github 3 comments

Grep::Query 'grep' a list with a logical query

In my tools I occasionally have the need to allow an end user to write selection code for various types of lists.

For maximum flexibility I implemented this as a language which allows the user to write a logical query using AND/OR/NOT and string/numerical operators such as regexps and the basic ==, !=, >, >=, <, <= comparisons.

It works either on lists of plain scalars or, with an extra 'field accessor', on lists of arbitrary hashes/objects.
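
As a rough illustration of what such a query stands in for (the field names are made up and the query grammar shown is only an approximation of the idea), the user-supplied string replaces a hard-coded grep like this:

  # e.g. for a query meaning:  size > 1024 AND NOT name =~ /\.bak$/
  my @selected = grep { $_->{size} > 1024 && $_->{name} !~ /\.bak$/ } @files;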

After some rudimentary discussion on module-authors@perl.org, the name Grep::Query was suggested which I think is fairly apt, but it's not written in stone if there are good reasons to have something better.

Any opinions welcome.

TIA,

ken1

kenneth-olwing@github 0 comments

App::H2N A public copy of the h2n scripts used in the book "DNS and BIND"

The code is very helpful for anyone running the 'bind' DNS server. The '/etc/hosts' file format is used for the host database, and 'h2n' and other programs are used to generate the required 'bind' files and to provide various information about the system. Given the many options available with 'h2n', the user will quickly find that one or more configuration files can be used to capture oft-used options and to document particular uses of the program. It is recommended that both the input database files and the configuration files be kept under version control.

The package organization is still being reworked to follow CPAN standards, and some tests will be added before the collection is uploaded.

The output from running 'h2n --help' is shown below.

Usage:  h2n [zone creation options] | -V [zone verification options]

The zone creation options are:
  -A Don't create name server data for aliases in the host table
  -a NET[:SUBNETMASK|/CIDRsize [mode=S]] [NET ...]
     Add hostname data on NET to -d DOMAIN but without PTR data
     mode=S  Allow /8-24 network to be a supernet to smaller-classed nets
  -B PATH
     Set absolute directory path where boot/conf files will be written
  -b BOOTFILE
     Use BOOTFILE instead of the default: ./named.boot (BIND 4)
  -C COMMENT-FILE
     Create RRs using special host file comments as keys into COMMENT-FILE
  +C PRE-CONFFILE
     Prepend contents of PRE-CONFFILE to the BIND 8/9 conf file (+c option)
  -c REMOTE-DOMAIN [mode=[A][I][D[Q]][HS]] [REMOTE-DOMAIN [mode=...]
     Add CNAMEs which point to REMOTE-DOMAIN
     mode=A  Create additional CNAMEs for aliases in REMOTE-DOMAIN
         =I  REMOTE-DOMAIN is an intra-zone subdomain of -d DOMAIN
         =D  Defer CNAMEs; name conflicts prefer -d DOMAIN over REMOTE-DOMAIN
         =Q  Don't report name conflicts that prevent deferred CNAME creation
         =H  enable -hide-dangling-cnames REMOTE-DOMAIN option
         =S  enable -show-dangling-cnames REMOTE-DOMAIN option
  +c [CONFFILE] [mode=S|M]
     Use CONFFILE instead of the default: ./named.conf (BIND 8/9)
     mode=S  Create CONFFILE with zone entries in single-line format (default)
         =M  Create CONFFILE with zone entries in multi-line format
  -D [FILE]
     Create delegation information to link in with your parent zones
  -d DOMAIN [db=FILE1] [spcl=FILE2] [mode=D|Q]
     Create zone data file for DOMAIN
     db=FILE1    Override default filename of db.LABEL, e.g., label.movie.edu
     spcl=FILE2  Override default filename of spcl.LABEL for existing RRs
     mode=D      Set default domain of unqualified hostnames to DOMAIN
         =Q      Silently ignore hostnames that do not match DOMAIN
  -e EXCLUDED-DOMAIN [EXCLUDED-DOMAIN]
     Exclude hostfile data with names in EXCLUDED-DOMAIN
  -f FILE
     Read command line options from FILE
  -H HOSTFILE
     Use HOSTFILE instead of /etc/hosts (read STDIN if HOSTFILE is `-')
  -h HOST
     Set HOST in the MNAME (master name server) field of the SOA record
  -I [ignore|warn|audit|audit-only|warn-strict|fail|strict] [rfc2782]
     Control level and type of various RFC conformance checks
     ignore       Disables checking of domain names and zone data consistency
     warn         Issue warning when hostnames contain illegal characters
     audit        Check zone data for integrity and RFC compliance + `warn'
     audit-only   Check zone data integrity without the `warn' check
     warn-strict  Warn about single-character hostnames + `warn' + `audit'
     fail         Reject hostnames with illegal characters + `audit'
     strict       Reject single-character hostnames + `fail' + `audit'
     rfc2782      Check SRV RRs for `_service._protocol' labels in owner names
  -i NUM
     Set the serial number of all created/updated zone files to NUM
  -L NUM
     Set file handle limit to NUM
  +L [LOG-SPEC]
     Add a logging specification to the BIND 8/9 config files
  -M [no-mx|smtp|no-smtp]
     Restrict the generation of MX records.  No argument means that MX
     records will not be generated under any circumstances.  Otherwise,
     set the default action which can be overridden on a host-by-host basis.
     no-mx    Do not generate any MX records
     smtp     Only generate the self-pointing MX record
     no-smtp  Only generate the global MX record(s) from -m option(s)
  -m WEIGHT:MX-HOST [WEIGHT:MX-HOST]
     Include MX record for each host not having [no mx]/[smtp] comment flags
  +m [D|C|P|CP]
     Control RR generation method for multi-homed hosts
     D   Use default behavior (A RRs for all names, CNAMEs for common aliases)
     C   Create A RRs for canonical name and 1st alias, CNAMEs for all others
     P   Create PTR RRs that point to A RR of 1st alias instead of canonical
     CP  Combine `C' and `P' flags
  -N SUBNETMASK|/CIDRsize
     Apply SUBNETMASK/CIDRsize as default value for subsequent -n/-a options
  -n NET[:SUBNETMASK|/CIDRsize [mode=S] [domain=DOMAIN] [ptr-owner=TEMPLATE]]
        [db=FILE1] [spcl=FILE2]
     Create zone data for each class-A/B/C subnet of NET for network sizes
     /8 to /24.  For /25-32 networks, create zone data to support RFC-2317
     delegations to DOMAIN with the owner names of the PTR records fitting
     the TEMPLATE pattern.
     mode=S      Allow /8-24 network to be a supernet to smaller-classed nets
     db=FILE1    Override default filename of db.NET, e.g., db.192.168.1
     spcl=FILE2  Override default filename of spcl.NET for existing RRs
  -O OPTION OPTION-ARGS
     Add option specifications to BIND 4 boot files
  +O [OPTION-SPEC]
     Add option specifications to BIND 8/9 conf files
  -o [REFRESH]:[RETRY]:[EXPIRE]:[MINIMUM]:[DEFAULT-TTL]
     Set SOA time intervals
 +om OPTION OPTIONS-ARGS
     Adds zone-specific options to BIND 8/9 master conf
 +os OPTION OPTIONS-ARGS
     Adds zone-specific options to BIND 8/9 slave conf
  -P Preserve upper-case characters of hostnames and aliases in the host table
  -p REMOTE-DOMAIN [mode=A|P] [REMOTE-DOMAIN [mode=...]
     Create only PTR data for REMOTE-DOMAIN hosts
     mode=A  Required flag if REMOTE-DOMAIN's forward-mapping zone built w/ -A
         =P  Enables alternate method of PTR generation as described for +m P
  -q Work quietly
  -r Enable creation of RP (Responsible Person) records
  -S SERVER [SERVER]
     Adds NS record to zone(s) for the last preceding -d option or -n option(s)
  +S [enable|disable]
     Control class-A/B/C NETs to act as supernets for subsequent -n/-a options
  -s SERVER [SERVER]
     Adds NS record to zones for -d option and all -n options
  -T [mode=M] [RR='DNS RR' [RR='...']] [ALIAS='name [TTL]' [ALIAS='...']]
     Add additional top-of-zone-related records to DOMAIN of the -d option
     mode=M  Add the global MX record(s) specified in the -m option
     RR=     Add 'DNS RR' with owner field set to whitespace or to `@'
     ALIAS=  Add CNAME RR with owner field of 'name' & RDATA field set to `@'
  -t [O|P]
     Generate TXT records from host table comment fields excluding h2n flags
     O   Only generate a TXT record if an explicitly quoted string is present
     P   Prefer explicitly quoted text but otherwise act in the default manner
  +t DEFAULT-TTL [MINIMUM-TTL]
     Create $TTL directives & SOA Negative Cache TTL
  -u CONTACT
     Set CONTACT as the mail addr. in the SOA RNAME (responsible person) field
  -v Display the version number of h2n
  -W PATH [mode=O]
     Set absolute directory path where `spcl'/zone files will be read/written
     mode=O  Set old (pre-v2.60) behavior where PATH only appears in boot/conf
             `directory' statements and `spcl' $INCLUDE directives.
  -w Generate WKS records for SMTP/TCP for every MX RRset
  -X Generate only the BIND conf/boot file(s) and exit
  -y [mode=[D|M]
     Set SOA serial numbers to use date/version format
     mode=D  Set day format of YYYYMMDDvv allowing 100 versions/day (default)
         =M  Set month format of YYYYMMvvvv allowing 10,000 versions/month
  -Z ADDRESS [ADDRESS]
     Specify ADDRESS of primary from which to load unsaved zone data
  -z ADDRESS [ADDRESS]
     Specify ADDRESS of primary from which to load saved zone data
  -show-single-ns [-hide-single-ns]
     Report subdomain delegations that only have a single name server if
     auditing is in effect (default)
  -show-dangling-cnames [-hide-dangling-cnames] [REMOTE-DOMAIN [REMOTE-DOMAIN]]
     Report CNAMEs that point to non-existent external domain names or
     domain names with no RRs if auditing is in effect (default)
  -show-chained-cnames [-hide-chained-cnames]
     Display each out-of-zone chained CNAME if auditing (default is -hide)
  -query-external-domains [-no-query-external-domains]
     Make DNS queries for domain names in zones external to -d DOMAIN (default)
  -debug[:directory] [-no-debug]
     Prevent removal of temp files in /tmp or [directory] (default is -no)
  -glue-level [LEVEL]
     Specify/display the number (0-30) of chained inter-subzone delegations
     that are permitted before optional parent-zone glue RRs become mandatory
     if auditing is in effect.  Default LEVEL is 1.

The zone verification options are:
  -f FILE
     Read command line options from FILE
  -v Display the version number of h2n
  -I [audit|audit-only]
     Control level and type of various RFC conformance checks
     audit       Check zone data integrity & report names with illegal chars.
     audit-only  Check zone data integrity & ignore names with illegal chars.
  -V DOMAIN [DOMAIN]
     Verify the integrity of a domain obtained by an AXFR query
  -recurse[:depth] [-no-recurse]
     Recursively verify delegated subdomains to level [depth] (default is -no)
  -show-single-ns [-hide-single-ns]
     Report subdomain delegations that only have a single name server (default)
  -show-dangling-cnames [-hide-dangling-cnames] [REMOTE-DOMAIN [REMOTE-DOMAIN]]
     Report CNAMEs that point to non-existent out-of-zone domain names or
     domain names with no RRs (default)
  -show-chained-cnames [-hide-chained-cnames]
     Display each out-of-zone chained CNAME (default is -hide)
  -query-external-domains [-no-query-external-domains]
     Issue DNS queries for domains in zones external to -V DOMAIN (default)
  -check-del [-no-check-del]
     Check delegation of all discovered NS RRs (default)
  -debug[:directory] [-no-debug]
     Prevent removal of temp files in /tmp or [directory] (default is -no)
     Zone data temp file is re-verified instead of making a new AXFR query.
  -glue-level [LEVEL]
     Specify/display the number (0-30) of chained inter-subzone delegations
     that are permitted before optional parent-zone glue RRs become mandatory.
     Default LEVEL is 3.

This is ./h2n v2.61rc8

tbrowder@github 0 comments