April 2012 Archives

YAPC::NA 2013 Call For Venue

YAPC::NA 2012 in Madison, Wisconsin is just six weeks away. With this year's conference upon us, it is time to plan for 2013. The call for venue is officially open! The TPF Conference Committee will be accepting bids from today through June 1st, 2012.

What is YAPC::NA?

YAPC::NA is an annual Perl-focused conference held at various locations throughout North America. The conference is a grassroots symposium on the Perl programming language promoted by The Perl Foundation.

What is the "Call for Venue"?

Each year Perl Mongers groups bid to host the conference for the upcoming year in the location of their choosing. The "Call for Venue" is The Perl Foundation's official invitation for groups to send in their bids.

How do you submit a bid?

The best place to start researching is the bidding details page at yapc.org. While there, you'll find links to the venue requirements and the review criteria. You can do a little more research and peek at previous bids by searching the 'yapc' tag at The Perl Foundation blog.

Also, feel free to post your questions to this blog post or email tpf-conferences (at) perl (dot) org. If you know a previous organizer, it might not be a bad idea to chat with him or her to get some advice and see if you are ready to host a conference.

If you think you'd like to submit a bid this year, let us know that you plan on getting one together by emailing tpf-conferences (at) perl (dot) org so that we know that there might be a bid coming and so that we can help you get the bid together by the deadline.

After you're done and have your bid together, just email it to tpf-conferences (at) perl (dot) org. Remember, the deadline is June 1st, 2012.

It's been over a year since my last update here on Herbert Breunung's Perl 6 Tablets.

Work continues on this project, however, and tablets 2 to 4 on Perl 6's basic syntax, variables and operators have progressed.

The tablets have moved to their own site at http://tablets.perl6.org/ so please use this new address when referencing them. Herbert has written three blog posts about his recent work. His perl blog will keep us updated with his further progress. As always, I'm sure he would appreciate your feedback.

Joel Berger wrote:

After last month's breakneck development pace, I knew this month wouldn't be as gratifying, and indeed it turned into quite a slog.

This month involved lots of little bug fixes, posting dev releases to CPAN, then waiting for test results from CPANtesters. As a side note, there are more reports coming from Solaris and BSD than I would have expected. Sadly, one of the bugs that still hasn't been sorted out is this recurring Solaris bug when changing the working directory. It would appear that I am going to have to find a Solaris box or VirtualBox appliance, since waiting for test results for every fix attempt would take far too long.

I do have some nice things to report. I have received help from fellow WindyCity.pm member David Mertens and new contributor JT Palmer this month, so thanks guys! Next, I have just pushed a new dev release which (again) changes the mechanism of dynamic loading; this one relies more on DynaLoader's facilities than on munging ENV variables. This seems to be more platform independent, or I should say, as platform independent as p5p has written into it. I have much faith in them! Preliminary results seem promising. David had been having problems with a candidate Alien:: module that he is writing, and this release seems to have fixed them. I hope to see Darwin and Win32 start passing too (gasp).

With that said, I am still targeting more documentation and a basic testing framework before a 0.001 alpha release. Seems like I'm always saying this, but I hope this will be coming shortly.

Fork Alien::Base on GitHub, and note that the 0.000_009 dev release is still contained in the dlopen branch. I will merge it once the tests seem to bear it out.

Original article at Joel Berger [blogs.perl.org].

David Golden reported:

Current progress

Three of the eight deliverables are complete:

  1. Publish a Chef cookbook for Perl interpreter deployment
  2. Publish a Chef cookbook for CPAN module deployment
  3. Publish a Chef cookbook for Plack application deployment

These have been uploaded to the Opscode Community Cookbooks site in two separate cookbook distributions:

  • perlbrew
  • carton

The perlbrew cookbook satisfies deliverables #1 and #2. It provides several "lightweight resource providers" (LWRPs -- a Chef term) that use perlbrew to deliver various automation capabilities:
  • perlbrew_perl -- installs a perl from source to a user-configurable directory (by default, "/opt/perlbrew/perls/")
  • perlbrew_lib -- establishes a local::lib style directory for use with a particular perlbrew perl
  • perlbrew_cpanm -- installs CPAN modules into a perlbrew perl or local::lib
  • perlbrew_run -- runs arbitrary bash commands in the context of a particular perlbrew perl or local::lib

For example, this snippet of ruby could be used in a recipe to install a perl, configure a library, install some CPAN modules to the library, and then run a perl command using that perl and library:

include_recipe 'perlbrew'

perlbrew_lib '[email protected]'

perlbrew_cpanm "Stuff we need" do
  perlbrew '[email protected]'
  modules ['Data::UUID::MT', 'Data::GUID::Any']
end

perlbrew_run "print a UUID" do
  perlbrew '[email protected]'
  command "perl -MData::GUID::Any -le 'print Data::GUID::Any::guid_as_string'"
end


The carton cookbook satisfies deliverable #3. It provides an LWRP called 'carton_app' that uses carton to install application-specific dependencies into an application-local directory and then sets up the application as a system service using runit.

For example, this snippet of ruby could be used in a recipe to launch a carton-based application from a deployment directory. Note that most of the application-specific attributes are parameterized using Chef's 'node' data structure for each deployment node, making it easy to change how the application is deployed to different servers:

include_recipe 'carton'

carton_app "hello-world" do
  perlbrew node['hello-world']['perl_version']
  command "starman -p #{node['hello-world']['port']} app.psgi"
  cwd node['hello-world']['deploy_dir']
  user node['hello-world']['user']
  group node['hello-world']['group']
end

carton_app "hello-world" do action :start end

Current work and next steps

I have started refactoring the initial draft of pantry to make it easier to configure nodes without requiring users to edit JSON directly (deliverable #4). This will let me streamline the tutorials and help users following the tutorial to avoid introducing JSON errors and getting stuck.

Next month I expect to finish and publish the hello-world tutorial (deliverable #5) and have a (rough) draft of the "real world" tutorial (deliverable #6).

I was pleased to hear that my OSCON proposal was accepted and my Cooking Perl with Chef talk will be on Thursday, July 19. That presentation will satisfy deliverable #7 (though I may deliver a draft early as a test run). If I find out that my OSCON session will be videotaped, I'll use that as the video tutorial (deliverable #8). If not, I'll prepare a separate screencast. (I may do both and use the short screencast as part of the talk to avoid the risk of a live demo.)

The North American "Yet Another Perl Conference" starts June 13th. Registration is open. All of the workshops have already sold out, but there are still tickets available for the general conference and the spouses program.

Also, if you want to reserve a hotel room at the YAPC::NA 2012 conference facilities then you need to do so now. Reservations close on May 1.

See you at YAPC::NA!

Nicholas Clark writes:

A short report this month as less work got done, due to illness and disruption.

The single largest item of the month was spent getting my head around the state of cross compilation. Mostly this was figuring out the history and state of various cross compilation approaches in the core codebase, to help Jess Robinson refine a potential TPF grant application for cross compilation (particularly to Android). At the time of writing most of this is still groundwork (and intangible), such as studying her x-compile-android work at https://github.com/castaway/perl and the various existing *nix cross-compilation approaches in the core, but hopefully it should soon materialise into the actual published grant application, approval, and then visible code and progress. We think we can see an approach to take that has a good chance of working out long term, by being as simple as possible within the existing build setup.

I also diagnosed the likely cause of File::Glob failures we're seeing on some HP-UX machines. It looks like another C compiler bug handling bool. Pesky newfangled C99 and its complex new features. 12 years is clearly not long enough for compiler writers to be sure to have all the bugs squished. However, I've not had time to investigate further and be confident that my initial "top of my head" fix is the best long term plan, so this build failure is not yet resolved.

I spent some time starting to prune the core's todo list. Historically, the core's list of todos was recorded in the perltodo man page. This really isn't the best place for it, as installing a snapshot that becomes increasingly out of date isn't that useful to anyone. So 5.15.9 is the end of an era in this respect. The todo list is now in Porting/todo.pod, and the man page replaced with a small placeholder that points the reader to the git repository for the current version. Currently the version in blead is unchanged, but in a branch I've culled about 10% of the todos, those that are either done, mostly done, or no longer relevant. There are still a few to be updated where we have a better idea of how to do it, and the document needs restructuring to make it easier to spot appropriate "starter" tasks.

Whilst looking at some older bugs I needed to build various CPAN modules on OpenBSD against older versions of perl. Bootstrapping (older) CPAN didn't turn out to be as easy as it could have been, partly thanks to command line tools somehow managing to double gzip tarballs. The following approach might be useful to others in a similar situation on a recalcitrant setup:

1) manually get a tarball of the current CPAN.pm onto the machine and unpack it

2) Run its Makefile.PL. For now, don't worry about the missing pre-requisites warnings. Build it. (Run tests.) Install it.

3) Run the newly installed cpan client on the checked out tarball, as 'cpan .' If you need proxies for HTTP and FTP, set HTTP_PROXY and FTP_PROXY appropriately in your environment. Configure the CPAN client as prompted. This will fetch the missing pre-requisites, build and install them.

This approach is far more likely to succeed than trying to get the older version's installed cpan client to fetch a new CPAN.pm, as the older version doesn't incorporate the bug fixes and workarounds for awkward command line tools.

A more detailed breakdown summarised from the weekly reports. In these:

16 hex digits refer to commits in http://perl5.git.perl.org/perl.git
RT #... is a bug in https://rt.perl.org/rt3/
CPAN #... is a bug in https://rt.cpan.org/Public/
BBC is "bleadperl breaks CPAN" - Andreas König's test reports for CPAN modules
ID YYYYMMDD.### is a bug number in the old bug system. The RT # is given afterwards.*

 0.75   5bec93bead1c1056 regression
 0.75   HP-UX build failure
 1.75   ID 20010303.010 (#5956)
 0.25   ID 20010929.014 (#7765)
 0.50   RT #111462
 0.50   RT #26001
 1.00   RT #36309
 1.25   RT #53200
 0.50   RT #55550
 0.25   RT #75780
 0.50   RT #79960
 1.50   Testing on OpenBSD
 0.50   checking accuracy of git tags
22.00   cross compilation
 6.25   minor changes for 5.15.9
 3.75   process, scalability, mentoring
 6.75   pruning the todo list
        pruning the todo list, Pod::HTML
17.75   reading/responding to list mail
 1.75   smoke-me branches

73.00 hours total
* One can convert old system numbers to RT ticket numbers using the form at https://rt.perl.org/perlbug/. The old system numbers are mostly a historical curiosity, but they can be useful when searching git logs and the mailing list archives.

Nicholas Clark writes:

Sorry for the delay in this report - I've been ill, been away, been ill while away, and generally disrupted, and I wasn't the only one ill in the house. Fortunately everyone is well again, and the backlog of other tasks is considerably reduced.

Zombie undead global backref destruction panics strike again. I don't know what it is with these critters, but another variant turned up, in almost the same place, and triggered by almost the same code. I think that this is the last one, simply because this was the third and final branch of the backref code, and now it's been diagnosed and fixed.

In this case, the problem is that it's possible for the last (strong) reference to the target [tsv in the code in Perl_sv_del_backref()] to have become freed before the last thing holding a weak reference. If both survive longer than the backreferences array (the previous cause of problems), then when the referent's reference count drops to 0 and it is freed, it's not able to chase the backreferences, so those backreferences aren't NULLed.
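The normal, non-buggy behaviour that this machinery implements can be sketched from pure Perl with Scalar::Util (an illustration of weak references and backref clearing in general, not of the internals themselves):

```perl
use strict;
use warnings;
use Scalar::Util qw(weaken isweak);

my $target = { name => 'the stash' };  # plays the role of the referent
my $weak   = $target;                  # plays the role of the weak reference (e.g. CvSTASH)
weaken($weak);                         # $weak no longer keeps the hash alive

print isweak($weak) ? "weak\n" : "strong\n";       # weak
undef $target;                         # drop the last strong reference...
print defined $weak ? "dangling\n" : "cleared\n";  # ...backrefs are chased, $weak becomes undef
```

The bug described above is precisely the case where this clearing step is skipped, leaving the equivalent of a "dangling" result.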

For example, a CV holds a weak reference to its stash. If both the CV and the stash survive longer than the backreferences array, and the CV gets picked for the SvBREAK treatment first, and it turns out that the stash is only being kept alive because of an our variable in the pad of the CV, then midway through CV destruction the stash gets freed, but CvSTASH isn't set to NULL. It ends up pointing to the freed HV. Hence that pointer is chased into Perl_sv_del_backref(), but because it's pointing to a freed HV the relevant magic structure was no longer there to be found, so a NULL pointer was assigned to a local variable. Subsequent code panicked because it thought that could never happen, at least not without a bug. Except, as the investigation showed, it could happen quite legitimately in exactly this scenario. During global destruction, all bets are off.

I don't believe that "better" destruction ordering is going to help here - during global destruction there's always going to be the chance that something goes out of order. We've tried to make it foolproof before, and it only resulted in evolutionary pressure on fools. Which made us look foolish for our hubris. :-(

I think that the reason that all these critters are shuffling towards us *now*, despite being in code that's quite long lived, is because since 5.14.0 Dave has re-worked the SV destruction code. Previously it would recurse into data structures, which had the unpleasant side effect of blowing the C stack when it tried to do too much at once. [Crash and burn - not good] Dave has made much of that code iterative now, which avoids the crashing. [Generally this is seen as progress :-)] However, it's changed the destruction order, and I think in some cases that is exposing long-latent bugs elsewhere in the code that destruction calls, particularly during global destruction.

Another significant activity in February was diagnosing and resolving ticket #37033. The low bug number reveals that this is quite a long standing bug, related to the parser leaving file handles open in some cases. The background is that the perl interpreter is either passed a filename, or if none is passed defaults to reading from STDIN. If reading from STDIN, the interpreter should not close it, as it did not open it (and implicitly closing STDIN out from underneath a program is bad). But any file handle the interpreter opened (internally, as a side effect of doing its job) should be closed.

The problem was that the check for close-or-not used to be simply "is this filehandle STDIN?"

The troubles arise from the interaction of the definition of STDIN with the POSIX semantics of open. "is this STDIN?" is pretty much "is this file handle open on file descriptor 0?". open gives you the lowest unused file descriptor. So if the user program closes STDIN, then the next file handle that is opened meets the "is this STDIN?" test. And the parser was mistaking this for the situation where it had defaulted to reading the program from STDIN because no filename was passed in, so it was leaving the file handle open. Of course, this is a leak. However, it was also starting to show up as visibly buggy behaviour in code that manipulated STDIN - close STDIN, do some "innocent" calculation, (re)open STDIN, only it's not on file descriptor 0 now - what's wrong?
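The descriptor reuse behind all this is easy to reproduce (a sketch; any readable file would do in place of $0):

```perl
use strict;
use warnings;

# POSIX open() hands out the lowest unused descriptor. So once STDIN is
# closed, the very next filehandle we open lands on fd 0, and would pass
# any "is this filehandle STDIN?" test based on the descriptor alone.
close STDIN;
open my $fh, '<', $0 or die "can't open $0: $!";
print "new handle is on fd ", fileno($fh), "\n";   # fd 0
```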

What changed? Karl's work in improving Unicode semantics means that several ops now need to look up certain Unicode properties in some cases. If these aren't cached yet, there's a call back into the parser as part of the loading routines. And if that call into the parser happened while STDIN was closed, oh dear: come close time the parser used to think that it had been defaulting to reading from STDIN, and left the handle open.

The parser now explicitly tracks what it opened, and hence closes everything correctly.

A side effect of tracking all this down was that I started to look at the code in perl.c that handles the parsing of the command line options and the initialisation they trigger, along with the handling of the "script file name", the "-e" option and the default to STDIN. Some of the cleanup has been committed to blead and will be in 5.16.0. Other parts are in the branches nicholas/RT37033-followup and smoke-me/kick-FAKE_BIT_BUCKET. However, it remains a work in progress - right now perl needs to open /dev/null as part of -e handling, because of assumptions in the parser. However, I think I can see roughly how to make a small change to the source filter code that would permit -e to avoid needing to open any file handle.

installhtml also forced its way onto my plate again in February. This was mysteriously passing its tests on George Greer's Win32 setup that smokes "smoke-me" branches, and yet failing one test on George Greer's Win32 setup that smokes blead. Same operating system, same code, same everything, so why isn't it the same result? After quite a lot of head scratching and code chasing (you are in a twisty maze of abstraction layers, all alike), it turns out that the problem is that on one VM the smoker is configured to smoke at a pathname D:\path\to\smoker and on the other VM it's configured to smoke at d:\path\to\smoker. That difference is almost irrelevant, because filenames on Win32 are* case preserving, case insensitive, and most things aren't concerned with which name a file has, merely whether they can open (or delete) it.

However, Pod::Html has an explicit white-box test for its caching. That test needs to be sure that the right pathnames are in the cache index. And it turns out that the pathnames written to the cache are canonicalised by File::Spec, whilst the expectation list built by the test is not. And that whilst files and directories on Win32 are case preserving, and hence unchanged on canonicalisation, drive letters are canonicalised to upper case. Yay.
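The drive-letter folding can be seen from any platform by using the Win32 backend of File::Spec directly (a sketch of the mismatch, not Pod::Html's actual white-box test):

```perl
use strict;
use warnings;
use File::Spec::Win32;   # pick the Win32 backend explicitly, whatever the host OS

my $configured = 'd:\Path\To\smoker';   # how one smoker VM was configured
my $canonical  = File::Spec::Win32->canonpath($configured);
print "$canonical\n";   # directories are case-preserved, drive letter upper-cased
```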

I wonder what useful stuff gets displaced from my brain by this obscure knowledge? I'm sure I used to be able to remember the lyrics to American Pie... :-(

To finish the month, I finished a chunk of work related to a regression introduced by commit 6634bb9d0ed117be. A side effect of the improvements in that change was a small regression in how code such as this would deparse:

use 5.10.0;
say "Perl rules";

It should deparse with a plain C<say> - instead it started to be deparsed with an explicit C<CORE::say>. Of course nothing in life (or at least in perl 5) is simple, as verifying that this is fixed turned out to be something the B::Deparse tests couldn't actually do - they were structured in such a way that they effectively ran with the C<say> feature implicitly enabled at the top level. So first the B::Deparse test infrastructure had to be refactored to remove that, and tests that were relying on it fixed.
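This class of check can be probed programmatically with B::Deparse (a sketch; the exact output text varies between perl versions):

```perl
use strict;
use warnings;
use feature 'say';
use B::Deparse;

# Deparse a code ref that uses the say feature and inspect the result.
my $code = sub { say "Perl rules" };
my $text = B::Deparse->new->coderef2text($code);
print $text, "\n";   # the body should contain say (possibly CORE::say on affected versions)
```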

A more detailed breakdown summarised from the weekly reports. In these:

16 hex digits refer to commits in http://perl5.git.perl.org/perl.git
RT #... is a bug in https://rt.perl.org/rt3/
CPAN #... is a bug in https://rt.cpan.org/Public/
BBC is "bleadperl breaks CPAN" - Andreas König's test reports for CPAN modules

 7.25   B::Deparse CORE::say regression
 1.00   B::Deparse refactoring
 0.25   CPAN #71139
        Pod::Functions (and then Carp-related pre-req fail)
 0.75   RT #109726
 0.75   RT #109828
 0.50   RT #110078
 0.50   RT #110248
 0.25   RT #110736
 1.00   RT #27392
 0.50   RT #36459
11.25   RT #37033
        RT #37033 related tidy up
        RT #37033, RT #111070
 0.50   RT #61754
 1.00   SVf and HEKf
 8.50   another backref panic
        bisect.pl (usability improvements)
 2.50   clang warnings
 0.50   cross compiling
 0.25   defined @::array
 0.50   defined and exists
 3.25   perl.c init functions
 8.25   process, scalability, mentoring
46.25   reading/responding to list mail
 0.25   smoke failure
 1.25   version 0.96

139.50 hours total

*generally. My understanding is that NTFS can be case sensitive, and can do hardlinks, but that most Win32 code would have kittens if it were run with these settings. Likewise, *nix can quite happily mount case insensitive file systems, and that's going to be breaking some code too...

Dave Mitchell writes:

As per my grant conditions, here is a report for the March period.

Spent the month successfully bringing run-time regex code blocks into the new way of doing things.

My previous work on this had handled the cases where the code appears within a literal regex, e.g.

$r = qr/xy(?{})/;

In such cases, the code block is parsed and compiled by the perl parser at the same time as the surrounding code is compiled; the compiled blocks are then retained and passed to the regex engine when the pattern is compiled, and preserved when, for example, a qr is interpolated within another pattern, as above.
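Under a recent perl (this grant work was released in 5.18.0), the literal case can be seen in action; note that literal code blocks need no use re 'eval', even when the qr is later interpolated:

```perl
use strict;
use warnings;

my $hits = 0;
my $r = qr/x(?{ $hits++ })y/;   # code block compiled together with the surrounding code
"xy" =~ $r;
print "hits=$hits\n";           # hits=1

my $outer = qr/a${r}b/;         # qr with code blocks interpolated into another pattern
"axyb" =~ $outer;
print "hits=$hits\n";           # hits=2 - the compiled block was preserved
```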

My changes this month extend this to run-time patterns, e.g.

use re 'eval';
my $code = '(?{})';
/$code/;
$foo =~ '(?{})'; # this is a string literal, not a regex literal

It does this by making a copy of the pattern string, but with any literal code blocks blanked out and \ and ' escaped; it then calls eval_sv() on an SV containing the string

    qr'pattern with \\,\' escapes and literal (?{}) blanked out'

Any compiled code blocks are then extracted from the qr and added to the pool of code blocks (e.g. from literal code blocks already compiled).
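On a perl where this work has landed, the run-time case behaves like this (a sketch; use re 'eval' is needed precisely because the code block arrives in a plain string):

```perl
use strict;
use warnings;
use re 'eval';   # required: the (?{...}) comes from interpolating a plain string

my $count = 0;
my $code  = '(?{ $count++ })';   # illustrative stand-in for a run-time code block
"aaa" =~ /a$code/;
print "count=$count\n";          # count=1 - the block ran once, at match time
```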

Having reached this point, I could then remove all the old-style re_eval compiling code in the core, including the hated sv_compile_2op() function, and considerably simplify S_doeval().

There are three more things I still need to do: the main one is to fix the way code blocks are invoked, chiefly to give them a proper entry on the context stack so that things like die, next, and caller don't crash, but also to make recursion work properly. The other two things are to do a big list of miscellaneous fixes and tweaks that I've been noting as I go along; and finally, to check that all the tickets attached to the meta-ticket have in fact been fixed by these changes.

Over the last month I have averaged 16 hours per week.

As of 2012/03/31: since the beginning of the grant:

108.4 weeks
1139.9 total hours
10.5 average hours per week

There are now 160 hours left on the grant.

Report for period 2012/03/01 to 2012/03/31 inclusive


Effort (HH::MM):

0:00 diagnosing bugs
69:10 fixing bugs
0:00 reviewing other people's bug fixes
0:00 reviewing ticket histories
0:00 review the ticket queue (triage)
69:10 Total

Numbers of tickets closed:

1 tickets closed that have been worked on
0 tickets closed related to bugs that have been fixed
0 tickets closed that were reviewed but not worked on (triage)
1 Total

Short Detail

67:40 [perl #34161] METABUG - (?{...}) and (??{...}) regexp issues
1:30 [perl #111974] 5.15.9 breaks Glib

It is that time of the year, again, and here follows the usual message. I am sorry for not being more creative :)

The Perl Foundation is looking at giving some grants ranging from $500 to $2000 in May 2012.

You don't have to have a large, complex, or lengthy project. You don't even have to be a Perl master or guru. If you have a good idea and the means and ability to accomplish it, we want to hear from you!

Do you have something that could benefit the Perl community but just need that little extra help? Submit a grant proposal by the end of April.

As a general rule, a properly formatted grant proposal is more likely to be approved if it meets the following criteria:

  • It has widespread benefit to the Perl community or a large segment of it.
  • We have reasons to believe that you can accomplish your goals.
  • We can afford it (please respect the limits, or your proposal will be rejected immediately).

To submit a proposal see the guidelines at http://www.perlfoundation.org/how_to_write_a_proposal and TPF GC current rules of operation at http://www.perlfoundation.org/rules_of_operation. Then send your proposal to [email protected]. Your submission should be properly formatted in accordance with our POD template.

Proposals will be made available publicly (on this blog) for public discussion, as was done in the previous rounds. If your proposal should not be made public, please make this clear in your proposal and provide a reason.

We have received the following grant application from Paul Johnson.

Before we vote on this proposal we would like to get feedback and endorsements from the Perl community. Please leave feedback in the comments or send email with your comments to karen at perlfoundation.org.

Name: Paul Johnson

Project Title: Improving Devel::Cover


In the past few months Booking.com has donated €100,000 to The Perl Foundation to aid the further development of the Perl 5 programming language and the craigslist Charitable Fund has donated $100,000 towards Perl 5 maintenance and for general use by the Perl Foundation.

I'd like to apply for a grant to improve Devel::Cover modelled on the successful grants currently in progress wherein David Mitchell and Nicholas Clark are working on improving the Perl 5 core.

My work situation is currently evolving from one in which I have a single full-time job to one in which I may have a number of smaller concurrent projects to work on, and so I would be able to make space to work on Devel::Cover for a day or two per week, or perhaps even more intensely for short periods. This grant would facilitate that.

Benefits to Perl 5:

Devel::Cover is the Perl code coverage tool. Perl is noted for its QA and testing focus, and I like to think that Devel::Cover is an important part of that. It is one of the more fully featured and unobtrusive coverage tools of any software language. In a similar fashion to the Perl core, for the most part it just works.

However, Devel::Cover is now a little over ten years old and for the majority of that time it would be safe to say that it has been primarily in maintenance mode. This is mainly because as the primary developer I have been unable to put much time into the module, and in particular I have not been able to give the kind of sustained effort required to solve some of the more tricky bugs or to add new features. I'd like to rectify that and ensure that Devel::Cover remains useful, relevant and one of the best coverage tools available for any language.

Project Details:

There are three main areas that are in need of attention: bugs, features and ease of development.

Bugs: There are currently 70 bugs on RT and various others that I have received by mail, that have been reported in other ways, or that I have found myself. Some of these are many years old because they are the sort of tricky problem which can't be solved in one evening after work.

Features: New features fall into two areas. The first of these overlaps with bugs somewhat. As Perl progresses, ingenious developers have created modules which bend and mold Perl's syntax and semantics, sometimes in ways which mean that Devel::Cover can no longer provide coverage.

Dancer falls into this category, as does MooseX::Declare and probably everything which uses Devel::Declare. Quite likely there are other modules in this category and as some of these modules become more popular the lack of coverage data becomes more apparent and may become an impediment to their continued uptake.

The challenge here is to recognise how these modules extend Perl's syntax and semantics, to collect coverage, and to map the results back to the enhanced syntax or altered semantics in a way which allows the developer to understand what the results mean. It's possible, and even likely, that the best way to accomplish this, in some cases, will be to improve or extend perl's core API.

The second area relates to brand new features. For example, as testing becomes a standard practice some code now has extensive test suites, and these can take a long time to execute. This leads to a natural reluctance to run the test suite as often as might be desirable. Devel::Cover could assist here by providing an optimum test ordering, or by reporting which tests exercise given changes.

Building on the same foundation as this work comes mutation coverage. This is the idea that making any functional change to your program should cause some test to fail. Coverage comes into the picture here because this useful technique becomes far too expensive unless you only execute tests which exercise the mutation.

And then there are new coverage criteria, such as path coverage or regular expression coverage. As far as I am aware, no coverage tool for any language provides regular expression coverage.

The Devel::Cover TODO list contains many more ideas for improvements in different areas.

Ease of Development: Many people have contributed to Devel::Cover, for which I am very grateful. But even during times when I have been relatively inactive there hasn't been sustained, substantial input from others. There are probably as many reasons for this as there are potential contributors, and I'm certainly not complaining about it, but I do want to ensure that anyone who is interested in developing Devel::Cover encounters as few difficulties as possible in doing so. For the overall health of the project it would be very nice if there were a few people who were in a position to take over maintenance if necessary. I want to remove roadblocks to this goal.

Deliverable Elements:

I propose to adopt a similar model to the successful grants currently in progress wherein David Mitchell and Nicholas Clark are working on improving the Perl 5 core. In those grants there are intentionally no pre-defined deliverables for the projects because the nature of the work does not lend itself to such an arrangement.

I intend to devote 400 hours to work on improving Devel::Cover, paid by the hour at the same below-commercial rate as Dave and Nick. In a similar manner to them, I would post a weekly summary on the perl-qa mailing list detailing activity for the week, allowing the grant managers and anyone else who is interested to verify that the claimed hours tally with actual activity, and thus allow early flagging of any concerns. Missing two weekly reports in a row without prior notice would be grounds for one of my grant managers to terminate this grant.

Exactly as Dave and Nick do, once per calendar month I would claim an amount equal to $50 x hours worked. I would issue a report similar to the weekly ones, but summarising the whole month. The report would need to be signed off by one of the grant managers before I get paid. Note that this means I am paid entirely in arrears.

At the time of my final claim, I would also produce a report summarising the activity across the whole project period.

Also, (the "nuclear option"), the grant managers would be allowed, at any time, to inform the board that in their opinion the project is failing, and the TPF board may then, after allowing me to present my side of things, decide whether to terminate the project at that point (i.e. to not pay me for any hours worked after I was first informed that a manager had "raised the alarm").

The specific tasks mentioned above are examples of some of the things I'd like to get done. By their nature it's difficult to estimate the amount of effort required, hence the set-up of this grant.

This model has worked well for core development. This grant can be viewed as something of a proof of concept too, to see whether the model can be extended to modules which do not ship with the core. In this case Devel::Cover has a very close relationship with the core, and could be considered to have similar characteristics and problems. It is now fairly mature, it basically just works, it has a fair number of old bugs which haven't been fixed because they are "hard", relatively few people are actively working on it, and new features, which are necessary to keep it useful and relevant, need to be carefully worked into old, stable code. As such, this grant may be well suited to test this model.

Project Schedule:

I expect that I can deliver 400 hours of work in approximately four or five months. If I do not manage to do so, I will continue work on the grant unless the grant managers decide that I shouldn't. If circumstances permit, I may be able to finish earlier.

As described above in "deliverable elements", specific tasks in "project details" are examples of the types of things that I intend to work on. I certainly won't be able to complete everything on the TODO list during this grant, and it may turn out that other tasks are a better use of TPF's money in my opinion, and that of my grant managers and the Perl QA group.

I am available to start work on this project immediately.


I wrote Devel::Cover and have maintained it for over ten years. I have also written commercial code coverage tools. I've been using Perl for almost as long as Perl has been around. Git tells me that my first core patch was applied 14 years ago. I've spoken at eight YAPC::EU conferences. I helped to lead The Perl Foundation's presence in the Google Code-in programme 2011/2012.

Amount Requested: $20,000

Endorsed by: Nicholas Clark, Ricardo Signes, Florian Ragwitz

Suggestions for Grant Manager: Florian Ragwitz & Ricardo Signes

About TPF

The Perl Foundation - supporting the Perl community since 2000. Find out more at www.perlfoundation.org.

About this Archive

This page is an archive of entries from April 2012 listed from newest to oldest.

March 2012 is the previous archive.

May 2012 is the next archive.


