IO::Concurrent Concurrent I/O framework
Implementing concurrent non-blocking I/O with select(2) or similar calls (without AnyEvent or IO::AIO) tends to produce a lot of complex procedural code. This framework makes such code easy to write as scenarios. AnyEvent and IO::AIO are great tools, but I think they are too heavyweight for this case; in my opinion, this use case needs neither async I/O nor an event loop.
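For context, here is a minimal sketch of the kind of procedural select(2) code the framework aims to replace. It uses only the core IO::Select module, not this framework's API (which is still to be designed):

```perl
use strict;
use warnings;
use IO::Select;

# A minimal select(2)-based read loop: the kind of procedural code
# that gets hard to follow once several handles and states are involved.
pipe(my $reader, my $writer) or die "pipe: $!";
syswrite($writer, "hello\n");
close $writer;

my $select   = IO::Select->new($reader);
my $received = '';
while (my @ready = $select->can_read(1)) {
    for my $fh (@ready) {
        my $n = sysread($fh, my $buf, 4096);
        if (!$n) {                   # EOF (or error): stop watching this handle
            $select->remove($fh);
            next;
        }
        $received .= $buf;
    }
    last unless $select->count;
}
print $received;                     # prints "hello\n"
```

With one handle this is manageable; once several handles, write buffers, and per-handle states are involved, this style grows quickly, which is what the scenario approach is meant to tame.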
Win32::Backup::Robocopy This module is a wrapper around robocopy.exe and tries to make its behaviour as simple as possible using a series of sane defaults, while still letting you control the robocopy.exe invocation in your own way.
The module offers two modes of backup. The default, simplest one copies all files into a single folder named after the backup (the mandatory name parameter passed when creating the object); every successive invocation of the backup writes into that same destination folder.
If you instead set the history parameter to true during construction, then inside the main destination folder (still named using name) there will be one folder for each run of the backup, named with a timestamp like 2022-04-12T09-02-36
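For illustration, a per-run folder name of that shape could be derived like this (the base path and backup name below are made up, and this is only a sketch, not the module's code):

```perl
use strict;
use warnings;
use POSIX qw(strftime);
use File::Spec;

# Per-run folder name like 2022-04-12T09-02-36; dashes instead of colons
# because colons are not allowed in Windows folder names.
my $stamp = strftime('%Y-%m-%dT%H-%M-%S', localtime);

# Hypothetical base destination and backup name, for illustration only.
my $dest = File::Spec->catdir('X:\backups', 'mybackup', $stamp);
print "$dest\n";
```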
Webservice::ForceManager Wrapper around Force Manager JSON API
A wrapper around the 'optimised for field sales' CRM ForceManager - specifically around the JSON API used to maintain the background data.
Structurally, I plan to have one base class (ForceManager.pm) and, currently, 13 ForceManager:: child classes, because the facility to set custom fields and required parameters is better served by per-entity class definitions than by a monolithic configuration.
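A rough sketch of what that base/child structure might look like. All package names, methods, and the Account entity below are hypothetical illustrations, not the module's actual API:

```perl
package Webservice::ForceManager;    # hypothetical base class
use strict;
use warnings;

sub new {
    my ($class, %args) = @_;
    return bless { %args }, $class;
}

# Each child class overrides these with its own entity metadata,
# instead of keeping everything in one monolithic configuration.
sub endpoint        { die "subclass must override endpoint\n" }
sub custom_fields   { [] }
sub required_params { [] }

package Webservice::ForceManager::Account;    # one hypothetical child class
use parent -norequire, 'Webservice::ForceManager';

sub endpoint        { '/accounts' }
sub required_params { [qw(name country)] }

package main;
my $accounts = Webservice::ForceManager::Account->new(api_key => 'xxx');
print $accounts->endpoint, "\n";    # prints "/accounts"
```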
I've already got a working version of this developed with assistance of ForceManager themselves, but it used something of a questionable approach.
I hope to use this as an easy on-ramp to CPAN contributions, as it's more a matter of procedure and optimisation than of hard coding problems.
Insights greatly appreciated!
Test::Skipper Skip tests that passed
Test::Skipper treats testing like a make target: the tests are considered up to date as long as neither the test.t script nor the Module::To::Test has changed since all significant tests last passed.
passing() saves state indicating success when $aok is true.
skipper() runs one succeeding test to report the skipping and exits cleanly.
Since $aok is the only signal that "all" tests have passed, peripheral tests may still be included for information.
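A sketch of how the state check and passing() might work under the hood; the function names follow the description above, but the implementation details (a state file compared by mtime against the watched files) are my assumptions:

```perl
use strict;
use warnings;

# Hypothetical core logic: skip the whole suite when a saved "all passed"
# state file is newer than both the test script and the module under test.
sub should_skip {
    my ($state_file, @watched) = @_;
    return 0 unless -e $state_file;
    my $state_mtime = (stat $state_file)[9];
    for my $file (@watched) {
        return 0 unless -e $file;
        return 0 if (stat $file)[9] > $state_mtime;   # something changed
    }
    return 1;
}

# passing($aok) would record success only when all tests passed.
sub passing {
    my ($aok, $state_file) = @_;
    return unless $aok;
    open my $fh, '>', $state_file or die "open $state_file: $!";
    print {$fh} time, "\n";
    close $fh;
}
```

skipper() would then be called when should_skip() is true, run one succeeding test to report the skip, and exit cleanly.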
Is there an older wheel, that I missed?
JSON::Response::Inspector Perform introspection on the data structure returned by a JSON request
I'm trying to determine if something like this already exists. If not, I'll create a module for this function. See https://perlmonks.org/?node_id=1222269 for background and to see what this code outputs.
The module will help developers quickly assess JSON (or any other data structure) responses. It will print a "merge" of every element in the data structure, without the actual data, so the developer can see all the fields at a glance and understand the structure of the response.
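A rough sketch of the idea using core JSON::PP. Note that this simplistic version only descends into the first element of each array, whereas the module would merge all elements (so below it misses the second user's email field):

```perl
use strict;
use warnings;
use JSON::PP qw(decode_json);

# Walk a decoded JSON structure and print the field layout (paths only),
# without any of the actual data values.
sub describe {
    my ($node, $path) = @_;
    $path //= '$';
    if (ref $node eq 'HASH') {
        describe($node->{$_}, "$path.$_") for sort keys %$node;
    }
    elsif (ref $node eq 'ARRAY') {
        # Simplification: inspect only the first element, not a true merge.
        describe($node->[0], "$path\[*]") if @$node;
    }
    else {
        print "$path\n";
    }
}

my $response = decode_json(
    '{"users":[{"id":1,"name":"a"},{"id":2,"email":"b"}],"total":2}'
);
describe($response);
# prints:
#   $.total
#   $.users[*].id
#   $.users[*].name
```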
SMS::Send::UK::BTSmartMessaging An SMS::Send driver that provides SMS message sending via the BT Smart Messaging API
SMS::Send::UK::BTSmartMessaging is an SMS::Send driver that provides SMS message sending via the BT Smart Messaging Tailored HTTP API (powered by Soprano)
Many thanks to the authors of the following modules that served as inspiration for this one:
Language::SIMPLE Simple Integratable Modular Programming Language Experiment
The SIMPLE Integrated Modular Programming Language Experiment.
This is an attempt at writing a script interpreter in another interpreted (Perl) program. Is this really necessary? I don't know, probably not. Is it foolish? Probably. But lack of a rationale and foolishness are rarely obstacles for the irrational or the foolish. Hence this experiment.
Programs are written to perform specific functions, may involve some interaction, and these actions may be customisable. Programming languages allow the developer to build such applications, and to build a diverse set of them. But the ability to include user-programmable scripting in these applications may actually be useful. So the objective is to add a user-customisable scripting facility to Perl applications.
Perl appears to make this particularly easy through the way it handles document parsing and its rather flexible handling of variables and subroutines via hashes. This module can simply be included in any Perl application, adding a programmability feature to the application.
The goals would be to have a script interpreting system that
- Handles comments
- Handles code blocks and program flow control
- Allows user defined variables
- Allows user defined subroutines
- Can be extended using external modules
- Remains customisable.
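As a toy illustration of how some of these goals (comments, user variables, extendability) might hang together, here is a tiny dispatch-table interpreter. None of this is the module's actual design or API; the point is that an external module could extend the language just by adding entries to the command table:

```perl
use strict;
use warnings;

my %vars;
my %commands = (
    # Each command is a handler; extensions would add entries here.
    set   => sub { my ($name, @rest) = @_; $vars{$name} = "@rest" },
    print => sub { print join(' ', map { $vars{$_} // $_ } @_), "\n" },
);

sub run_script {
    my ($source) = @_;
    for my $line (split /\n/, $source) {
        $line =~ s/#.*//;                       # strip comments
        next unless $line =~ /\S/;
        my ($cmd, @args) = split ' ', $line;
        die "unknown command '$cmd'\n" unless $commands{$cmd};
        $commands{$cmd}->(@args);
    }
}

run_script(<<'END');
# a trivial SIMPLE-like script
set greeting hello
print greeting world
END
# prints "hello world"
```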
The main initial goal was robotic control. The scripting would allow quick generation of scripts that define robotic sensors and motors programmatically, abstracting away internal functions and interface code. In fact, the program is based on a GPIO scripting application called piGears, made a few years ago as one of a series of projects for the Raspberry Pi. It allowed quick and easy scripting of the device's IO, which could be configured in a number of different ways depending on the project.
That was essentially a custom scripting tool for one specific device (the rPi) and a narrow domain (the IO). But such a tool may have wider application if it can be customised easily. Hence this simple, integratable, modular programming language experiment... a module that allows end-user scripting, adaptable to diverse roles.
So what's so special?
So how is this different from any other programming language? Firstly, it is integrated into an application as an end-user-facing language rather than a development language. Secondly, it is modular and customisable, offering functions specific to the application itself. Thirdly, it isolates and abstracts system functions rather than allowing direct access to them, both for security and to reduce complexity for the end user.
As a skeleton of a language, there is little to demonstrate what it can do, and less to discover what it needs to make it useful. For this reason, and because one of the main applications I hope to use it in is robotics, the first extension will involve the archetypal virtual robot...the Turtle (AKA Logo).
App::DB::Migrate DB Migrations Manager
You use the command line tool to set up the DB environment, generate migrations (.pm files), and run or roll back migrations, keeping track of them by id.
Generate a migration:
migrate generate -n my_migration_name
Run a migration:
You can find additional info in the README file of the GitHub repo.
I used this project to teach myself Perl, and it soon became a big project. I'm looking for reviews, and I need to learn about the requirements for publishing it to CPAN.
namespace::local Forget imports at end of scope, think namespace::clean inside-out
Say we need some utility functions inside a subroutine or scope, but we'd like to (a) keep them private and (b) keep them unavailable for the rest of the package.
This module would make imports (in fact, any symbol table changes) only available until the end of the scope.
namespace::clean does a similar thing (and is cool!), but with it one must be careful to avoid erasing functions that are still needed.
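The effect can be illustrated with plain local(), the underlying trick for making a symbol table entry scope-limited. This is not the module's API, just a hand-rolled demonstration of the behaviour it would automate for every import:

```perl
use strict;
use warnings;

# A probe that reports whether &main::helper currently exists.
sub helper_visible { defined &main::helper ? 'yes' : 'no' }

print helper_visible(), "\n";          # prints "no"
{
    local *main::helper = sub { 42 };  # scope-limited "import"
    print helper_visible(), "\n";      # prints "yes"
    print main::helper(), "\n";        # prints "42"
}
print helper_visible(), "\n";          # prints "no" again: the glob was restored
```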
List::Unique::DeterministicOrder Store and access a list of keys using a deterministic order based on the sequence of insertions and deletions
Discussion section from the POD is below.
Any suggestions for a better name are appreciated.
The algorithm used is from https://stackoverflow.com/questions/5682218/data-structure-insert-remove-contains-get-random-element-all-at-o1/5684892#5684892
The algorithm used inserts keys at the end, but swaps keys around on deletion. Hence it is deterministic and repeatable, but only if the sequence of insertions and deletions is replicated exactly.
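A minimal sketch of that array-plus-hash structure (my own illustration, not the module's code):

```perl
use strict;
use warnings;

# The hash maps each key to its position in the array; insertion pushes
# to the end, and deletion swaps the last key into the vacated slot, so
# order is deterministic but depends on the full insert/delete history.
my (@keys, %pos);

sub insert_key {
    my ($key) = @_;
    return if exists $pos{$key};
    push @keys, $key;
    $pos{$key} = $#keys;
}

sub delete_key {
    my ($key) = @_;
    my $i = delete $pos{$key};
    return unless defined $i;
    my $last = pop @keys;
    if ($i <= $#keys) {          # not deleting the last element itself
        $keys[$i]   = $last;     # swap the last key into the hole
        $pos{$last} = $i;
    }
}

insert_key($_) for qw(a b c d);
delete_key('b');                 # 'd' is swapped into b's slot
print "@keys\n";                 # prints "a d c"
```

Getting the nth key is then just an array index, and membership tests and deletions stay O(1) via the hash.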
So why would one use this in the first place? The motivating use case was a randomisation process where keys would be selected from a pool of keys, and sometimes inserted: e.g. the process might select and remove the 10th key, then the 257th, then insert a new key, followed by more selections and removals. The randomisations needed to produce the same results for a given PRNG sequence, for reproducibility purposes.
Using a hash to store the data provides rapid access, but getting the nth key requires the key list to be generated each time, and Perl's hashes do not provide their keys in a deterministic order across all versions and platforms.
Binary searches over sorted lists proved very effective for a while, but bottlenecks started to manifest when the data sets became much larger and the number of lists became both abundant and lengthy.
Since the order itself does not matter, only the ability to replicate it, this module was written.
One could also use Hash::Ordered, but it has the overhead of storing values, which are not needed here. I also wrote this module before I benchmarked against Hash::Ordered. Regardless, this module is faster for the example use-case described above - see the benchmarking results in bench.pl (which is part of this distribution). That said, some of the implementation details have been adapted/borrowed from Hash::Ordered.