Blog Closed

This blog has moved to Github. This page will not be updated and is not open for comments. Please go to the new site for updated content.

Tuesday, December 29, 2009

Great Summaries

It's been quite a busy week for me, and my activity level shows no signs of decreasing again until the new year. So, no substantial blogging from me for a little while.

In my absence, I strongly suggest everybody check out Lithos' blog. He's been posting excellent weekly summaries of Parrot and Perl6 activity. It's been a great resource for me when I've fallen out of the loop.

Saturday, December 19, 2009

Rethinking Matrixy Assignment

Variadic output arguments have been partially implemented in Matrixy in the "varargout" branch, but I'm starting to hit a wall with the current implementation. At the very least, it's not a clean or well-designed implementation of it. The parser is using all sorts of weird global variables and there are too many special cases to deal with.

A big portion of the issue is the problem with ambiguity between a matrix index and a function call. This statement can be either a call to function foo, or an index into matrix foo:

a = foo(b, c);

And, functions (but not matrices) can return multiple values:

[a, b(3), c(4, 5)] = foo(x, y);

So when we parse the first bracketed set of values, we don't know until we've parsed the equals sign whether it's an assignment target list or just this expression statement:

[a, b(3), c(4, 5)];

which is shorthand for this:

ans = [a, b(3), c(4, 5)];

But I've harped on all these problems before, and I won't dig into them in depth here. What I think we need to do in Matrixy, to bring some sanity back to the parser, is to create a matrix reference object, built from a combination of constant and variable references, that can be used to retrieve or modify values in a matrix. So this line:

x = [a, b(3), c(4, 5)];

becomes this PIR code sequence:

$P0 = !index_ref(a)
$P1 = !index_ref(b, 3)
$P2 = !index_ref(c, 4, 5)
$P3 = !build_literal_matrix($P0, $P1, $P2)
$P4 = !index_ref(x)

And this:

[a, b(3), c(4, 5)] = foo()

Becomes this:

$P0 = !index_ref(a)
$P1 = !index_ref(b, 3)
$P2 = !index_ref(c, 4, 5)
$P3 = !matrix_accessor($P0, $P1, $P2)
$P4 = !index_ref(foo)

Of course these are just first drafts of the call sequences and the method names, but it does illustrate the general idea. This is going to add a pretty significant number of function calls and created PMCs to the normal control flow too, but I can worry about optimizing things once they all work.

When we attempt to pull a value from a reference object, we'll determine whether it's a matrix or a function call and handle it accordingly. If we're pushing a value to it as in an assignment, we always just assume it's a matrix. M, to the best of my knowledge, doesn't support L-value function calls.
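To make the idea concrete, here is a small Python sketch of that deferred decision. Everything in it (the IndexRef name, the fetch/store methods) is hypothetical and just illustrates resolving the matrix-versus-function ambiguity at use time rather than at parse time; it is not Matrixy's actual implementation:

```python
# Hypothetical sketch: a reference object that defers the matrix-vs-function
# decision until the value is actually fetched or stored.

class IndexRef:
    """A reference to `name` plus indices, resolved only when used."""
    def __init__(self, scope, name, *indices):
        self.scope, self.name, self.indices = scope, name, indices

    def fetch(self):
        # R-value: a variable (matrix) shadows a function of the same name.
        target = self.scope[self.name]
        if callable(target):
            return target(*self.indices)      # it was a function call
        if self.indices:
            i = self.indices[0] - 1           # M uses 1-based indexing
            return target[i]                  # it was a matrix index
        return target

    def store(self, value):
        # L-value: always treated as a matrix; no L-value function calls.
        if not self.indices:
            self.scope[self.name] = value
        else:
            i = self.indices[0] - 1
            self.scope[self.name][i] = value

scope = {"b": [10, 20, 30], "foo": lambda n: n * 2}
assert IndexRef(scope, "b", 3).fetch() == 30    # matrix index
assert IndexRef(scope, "foo", 4).fetch() == 8   # function call
IndexRef(scope, "b", 1).store(99)               # indexed assignment
assert scope["b"][0] == 99
```

The parser only ever emits references; the runtime makes the call-or-index decision once, in one place.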

Soon I'm going to merge the varargout branch where I have been working on variadic output arguments. Even though I know this isn't the approach I will be taking in the long term, the branch does include a number of important regression tests and other refactors and enhancements that I don't want to lose. I might not end up getting to this issue until after Parrot's 2.0 release, however.

Wednesday, December 16, 2009

Parrot 1.9.0

Parrot 1.9.0 "Blue-Fronted Amazon" was released yesterday, courtesy of Gerd Pokorra. As always, Parrot showed up on time and under budget, a rarity in the software world. This is the last development release before the big 2.0 milestone on 19 January 2010. chromatic will be the release manager for 2.0, followed by darbelo (2.1, 16 Feb) and then cotto (2.2, 16 March). Probably around January I will put out a call for release managers in April and May too.

1.9.0 was a relatively conservative release in terms of entries in the changelog. Comparing this release announcement to those of previous months doesn't show the same number of high-profile projects. Part of the reason for this is the ongoing focus on testing and optimization; these things are great for the project but tend not to pad the changelog much. I also think that many of our core developers are starting to focus energy on other ecosystem-related projects: compilers, libraries, and utilities. After all, Parrot is an interesting project by itself, but all the various projects that it enables are even more so, which is a major draw for our developers. Success begets success, so the rapid proliferation of these side projects is just as important to the overall success of Parrot as a whole.

I expect 2.0 to be similarly conservative with much of the development team focusing on fixing bugs, improving documentation, and expanding test coverage. There are some big changes brewing that could land before 2.0, however, so it might turn out to be a big release indeed.

Monday, December 14, 2009

Parrot Developer Meeting Yesterday

Yesterday was the big online Parrot Developer Meeting, which I mentioned briefly last week. The idea was to have a meeting similar to the large Parrot Developer Summit from 2008 to reevaluate our long-term development roadmap and make sure we were on the right path for 2.0 and beyond. I had been one of the more vocal people saying that our roadmap up to that point was incomplete, outdated, and not reflective of our recent development priorities, so I'm particularly happy that this meeting was held.

It was a very productive meeting too, although due to time constraints and internet connectivity issues I wasn't able to participate as actively as I wanted. The meeting started with a 30-minute project retrospective led by chromatic, followed by a statement of our short-term project goals, some discussion of changes to the support policy, and then an item-by-item review of some of our existing and new roadmap items.

Short-Term Goals

Rakudo* ("Rakudo Star") is due shortly after Parrot's 2.3 release, and is going to be a major milestone for both projects. James Keenan suggested, to much agreement, that Parrot should focus almost single-mindedly on the needs of Rakudo* between now and then to help make that release as successful as possible. The desired return on investment is that the success of Rakudo* will help to spur increased interest from new and existing HLL developers, and demonstrate that Parrot is a viable platform to host these compiler projects.

Among the major needs of Rakudo* are improved Parrot performance. Specifically, optimizations for PCC and an overhaul of the GC were both mentioned as paths towards this goal.

Support Policy

Several days ago I sent an email to the list proposing that we rewrite our support policy to alleviate some of the problems people have been having with our long deprecation cycles. While these long deprecation cycles are good for stability and developer peace of mind, they have been a strong negative influence on development speed. Top this off with the fact that many of our systems are still immature and in need of major overhaul, and we run into some serious problems.

My suggestion, while a catalyst for discussion, was not accepted. What we have decided to do in its stead is to shorten our deprecation cycle by adding more supported releases. Parrot will now have a supported release every 3 months instead of every 6. So next year we will support releases 2.0, 2.3, 2.6 and 2.9. Hopefully this improved turnaround time will alleviate some of our issues and increase the speed of development.


Roadmap Review

We did a complete item-by-item review of our roadmap items, and assigned to each feature a specific release by which we would like to have it in place. Here is a quick listing of some of the major items on this list:

  • Improved GC (2.3)
  • Fixed L-Value semantics (2.6)
  • Overhaul NCI (2.6)
  • Asynchronous IO (2.9)
  • Concurrency (3.0)
  • Strings overhaul, including immutable strings (3.0)
  • Lorito (3.0)
  • PIRC (3.0)
  • JIT (3.3)
I personally suspect that some of these numbers will be rearranged in practice (AIO and Concurrency are going to go closely together, I predict Strings will become a pain point sooner than 3.0, PIRC is going to be affected by Lorito in unforeseen ways, etc.), but overall it's a decent and well-thought-out list, and I won't nitpick it. We would be lucky to get even half of these things done before 3.3, but I hold out hope that we can do better than that. I am especially hopeful considering the way our development team is steadily growing.

So that's my quick recap of the developers meeting. I'm planning to read back over the transcripts, and I'll write more posts about any topics that I find to be of particular interest.

Thursday, December 10, 2009

Parrot Developer Meeting

I should have blogged about this on Tuesday when it was decided. On Sunday, 13 December 2009, we are going to have a virtual Parrot design meeting.

3:30 EST

13 December 2009


We are going to talk about the long-term development roadmap and hopefully explicitly lay out our priorities for the foreseeable future. I personally would like to see us map out from now through 3.0, but I would be happy if we could just map through 2.6.

Matrixy Progress

I've been doing a lot of work on Matrixy lately. I find that recently I've been able to do a lot on that project in small bits, which is great when I want to touch it during a lunch break or between baby maintenance. I've certainly been able to work more on Matrixy this week than I have been able to write blog posts, for instance. In recent days I've:
  1. Done major cleanup and expansion of the test suite
  2. Added a bunch of builtins, including some new parrot-primitive functions that will allow me to write more functions in M and to possibly migrate some PIR-based builtins to M.
  3. Refactored and cleaned up dispatch
  4. Added Cell Array support
  5. Added proper nargin/varargin support
  6. Created a new branch to begin adding proper nargout/varargout support
It's the last item on the list that's been giving me a bunch of trouble recently, and the inherent difficulty of the task is probably the reason why I haven't gotten it working before now. The work I have done so far is mostly hackery, trying to add nargout and varargout without having to rewrite the entire grammar and dispatching mechanism. Of course, in the long run I am going to have to rewrite these things, but I'm just not ready to do that yet. I would rather have a proof of concept and some passing tests than nothing.

The problem with dispatch, or with anything in M, is that everything is ambiguous until runtime. The same syntax, "X(1)", could refer to the first element of the matrix X, the first element of the cell array X, or a call to the function X with the argument "1". This is further complicated by the fact that variables overshadow functions of the same name, but do not overwrite them completely: if we have both a variable and a function named X, we can still access the function version using the feval builtin. In M we can also call functions without using parentheses at all, so it isn't only postfix parentheses that create ambiguity; almost every single identifier lookup requires a runtime check.
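Those lookup rules (a variable shadows a function of the same name, but feval still reaches the function) can be modeled in a few lines of Python. All names here are made up for illustration; this is not how Matrixy actually dispatches:

```python
# Toy model of M's runtime name resolution: variables shadow functions,
# but feval bypasses the variable entirely.

variables = {}
functions = {"X": lambda n: n + 100}

def lookup(name, *args):
    """Resolve name(args): try a variable index first, then a function."""
    if name in variables:
        return variables[name][args[0] - 1]   # 1-based matrix index
    return functions[name](*args)             # fall back to the function

def feval(name, *args):
    """feval always calls the function, shadowed or not."""
    return functions[name](*args)

assert lookup("X", 1) == 101      # no variable X yet: function call
variables["X"] = [7, 8, 9]
assert lookup("X", 1) == 7        # variable now shadows the function
assert feval("X", 1) == 101       # but feval still reaches the function
```

The point is that the same call site produces different behavior depending on runtime state, which is exactly why the check can't be done at parse time.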

I've talked about all these syntax issues before, and won't dwell on them now. There are also semantic issues that need attention. Let's look at the case of nargout for instance.

function [x, y, z] = getcoords()

[x, y, z, w] = getcoords()

In this code snippet above, the "getcoords" function is called with 4 output arguments, but the definition of that function only provides for three. If "getcoords" doesn't explicitly check the number of outputs expected and throw an error, this assignment will proceed without a problem. x, y, and z will get the expected values in the caller context, and the w variable will simply be left undefined.

So what we have is really a fundamental disconnect between caller and callee. The callee can see how it was called by checking the nargin and nargout variables, and can choose to throw an error if those numbers do not match what it wants. A function can also return a different number of values than the caller expects. So if I just did a bare call to:


getcoords();

nargout here would be 0, but the function could still return 3 values, which would be stored in the global default variable "ans". Yesterday I started a refactor to make this possible, by trying to break assignments up into two parts: the callee returning an arbitrary array of values and the caller having to explicitly unpack those values. It's gotten me through a number of important test cases, although it is obviously not a great or pretty solution.
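That two-part split (the callee returns everything it produces, the caller unpacks only what it asked for, leaving extras undefined) can be modeled in a short Python sketch. The names here are illustrative assumptions, not Matrixy code:

```python
# Toy model of the caller/callee disconnect: the callee returns a list,
# the caller unpacks `count` outputs and pads missing ones as undefined.

UNDEFINED = object()

def getcoords():
    return [1, 2, 3]            # callee always returns what it produces

def unpack(values, count):
    """Caller side: take `count` outputs, padding any that are missing."""
    return [values[i] if i < len(values) else UNDEFINED
            for i in range(count)]

x, y, z, w = unpack(getcoords(), 4)
assert (x, y, z) == (1, 2, 3)
assert w is UNDEFINED           # the fourth output is simply left undefined

ans = unpack(getcoords(), 1)[0] # bare call: the result lands in ans
assert ans == 1
```

Notice that no error is raised for the mismatched count; that matches M's behavior of leaving the extra output undefined unless the callee checks nargout itself.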

[a, b, c];

is an R-value, and the generated matrix is stored in the default variable "ans". However,

[a, b, c] = foo()

is obviously an L-value, and I need to do some bookkeeping to keep track of the number of arguments so I can populate nargout in the call to foo (if foo is a function call, of course). So I create a global variable to store the number of L-values in the assignment, so that when I generate the actual assignment call I have access to that number. One problem I ran into yesterday, though, is that when a rule fails and we have to backtrack, we end up with these global variables in an inconsistent state. So the call:


foo();

doesn't have any L-values, and when I parse the function call I can't expect the global variable to exist. Likewise, when I parse:

[a, b, c];

I need to keep count of the L-values, even though this isn't an assignment. So yesterday I ran into the problem:

[a, b, c];
foo(); # Thinks nargout = 3

Fun, eh?

I'm not even entirely certain how I'm going to do all this right. Do I create a custom CallSignature subclass, and handle argument passing myself? This has the nice benefit that I can almost always treat "x(1)" as a function call, whether it's an actual function or an internal indexing function. The more I can abstract away the differences, the better. The "almost" in the previous sentence of course refers to "x(1) =" L-values, which would need to be indexed a little differently from a normal function call. And since I need to be manipulating indices before passing them to the PMC, I need to be calling a function to handle indexed assignments anyway.

It's all going to be a little tricky to get past this roadblock and to do it in a way that I find acceptable. However, Matrixy has good momentum right now and has a lot of great features already, so I'm hoping I don't get mired down for too long.

Friday, December 4, 2009

Pure-Parrot Testing

I had mentioned the nqpTAP project that dukeleto started a while back to provide a pure-Parrot testing harness. A few days ago he completely rewrote the project and re-released it as Tapir (a clever portmanteau of "TAP" and "PIR"). The test harness is written in PIR instead of NQP now, is more robust, and is self-testable. Dukeleto also apparently has plans to make it more configurable and maybe even pluggable, things which I am very excited about.

I would like to migrate Parrot-Linear-Algebra and Matrixy to use the new harness, and I'm sure other projects would like that as well. These two might make cool test cases. I'll post details when I have a good procedure for doing that.

This got me thinking more about a project I've been incubating in the back of my head for a while: I've been wanting to have a mock object testing framework for Parrot, and I think it would be reasonably easy to make one. So I'm going to draft out some of my ideas here.

The first thing we want is the actual mock object type; call it "MockObject" for short. This object type, once initialized with a number of options and restrictions, should act exactly like a normal object. It should provide methods, respond to VTABLE calls, and do all sorts of things that a normal object of the given type would do. The difference, of course, is that MockObject is just pretending.

Pretend I have a large system that needs to be tested. I pass request objects to it, and the system in turn accesses properties and methods in that request. I want to test and verify that my system is calling the correct methods with the correct arguments, and is accessing values in the proper order. To do this, I need to create a MockObject, and pass that object to my system to verify that things are happening correctly.

So a MockObject needs to respond to certain behaviors:
  1. Must be able to respond to method calls (including handling calls to methods which should not exist), being able to expect and verify parameter lists.
  2. Must be able to respond to vtable calls, being able to expect parameter values.
  3. Must be able to log method calls, vtable calls, and property accesses to verify order
A MockObject needs to act exactly like the kind of object it is impersonating, so it really can't have a public interface of its own. Any interface that MockObject implements for itself is going to be unusable for testing, and even if we keep that interface small there is always going to be some overlap where we can't test. To avoid this, we need to be able to configure MockObject without any visible interface. Sounds rough, eh?

So here's what I'm thinking. First, we have a MockManager class that contains the configuration methods for MockObject. To configure a MockObject, we don't call methods directly on it, we instead pass it to methods on the MockManager class. This saves us from overlapping interfaces in PIR-land. Second, we need to provide two interfaces: the "normal" PIR interface that perfectly imitates the target type, and the internal interface that MockManager uses for configuration. At the C level, we can have two VTABLE structures, which we swap out manually when doing configuration.

So without any further ado, I would like to show some PIR code that demonstrates how I think MockObject could be used as part of a normal test:

.local pmc mock, manager
.local int result
manager = new ['MockManager']
mock = manager.'new_mock'('ResizablePMCArray')
manager.'expect'(mock, 'sort')
mock.'sort'()
result = manager.'success'(mock)
ok(result, "the sort method was called!")

So we create a MockObject that is supposed to act like a ResizablePMCArray. We set up the expectation that the sort method is going to be called with no arguments. After we've called that method, we check that all our expectations were met. The test above should pass.
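The same expect/call/verify flow can be sketched in Python, which also shows why keeping configuration on a separate manager keeps the mock's own surface clean. All of these names mirror the hypothetical PIR API above; none of this is a real library:

```python
# Minimal Python analogue of the MockManager/MockObject split: the manager
# holds all configuration, so the mock has no public interface of its own.

class _Mock:
    """Records every method call; exposes no methods of its own."""
    def __init__(self, log):
        object.__setattr__(self, "_log", log)

    def __getattr__(self, name):
        # Any method name "exists"; calling it just logs the invocation.
        return lambda *args: self._log.append((name, args))

class MockManager:
    def __init__(self):
        self._expected = []
        self._log = []

    def new_mock(self):
        return _Mock(self._log)

    def expect(self, mock, method, *args):
        self._expected.append((method, args))

    def success(self, mock):
        # Every expected call must appear in the recorded log.
        return all(item in self._log for item in self._expected)

manager = MockManager()
mock = manager.new_mock()
manager.expect(mock, "sort")
mock.sort()                      # the "system under test" calls the method
assert manager.success(mock)     # every expectation was met
```

Python gets away without the C-level vtable swap because __getattr__ already intercepts unknown names; in Parrot the two-vtable trick plays the same role.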

There are obviously a lot of issues raised by this potential implementation and a lot of questions that still need to be addressed before we can use any of this. However, I do think this would be a great project and very usable for a number of projects. I would definitely love to hear what other people think about it as well.

Thursday, December 3, 2009

Computer Status and Virtualization

I mentioned a little while back that I was having computer problems, and that's putting it mildly: after upgrading from Ubuntu 9.04 to 9.10, my system became completely unstable and mostly unusable. I spent several evenings rebooting my computer over and over because it kept freezing. I filed a bug with Ubuntu, received no fixes (though I did eventually get confirmation from another user who was seeing a similar issue), and finally rolled back to 9.04.

When I say "rolled back", I don't mean that I had previously taken a faithful system backup image and was able to quickly and easily jump back to it without losing any data. No. I pulled out my external hard drive and began the arduous task of trying to scavenge all the data I should have been backing up regularly. It took me an entire evening, because several times the computer froze in mid file transfer and I had to reboot, delete the partial file fragments, and start over. I'm still not 100% certain that all the files I backed up are sane and faithful.

As much as I would like to give a stupid grin and say "Golly, I've sure learned my lesson!" I'll probably run into this same mess next time I try to upgrade my OS too. Even a monkey can be taught.

Recently I've been playing with VirtualBox on my work computer. Basically, I was trying to get access to a Linux environment from my work computer without having to deal with Cygwin. Say what you want to about the merits of Cygwin, I've simply never liked using it. Besides just wanting access to a real Linux environment, I also wanted to play around with new OSes. I really liked Ubuntu by 9.04 but my experience with 9.10 had me pretty sour for a little while, and I wanted to entertain a few more options first.

So I set up VirtualBox on my work computer, installed a few new guest OSes, and things have worked like a charm. I tried Fedora 12, which was pretty cool; I hadn't used Fedora since version 7, and it's come a long way since then. I also tried OpenSolaris: it was nice, but it was hard to differentiate from a Linux distro on my virtual platform, and I was also having some strange stability problems with Xorg. I also tried FreeBSD and OpenBSD, but was unable to get either of them installed and running. If anybody knows the trick to getting a BSD variant installed on VirtualBox, I would love to hear about it.

Once I got my personal computer back online, I decided to install VirtualBox here as well. I could test out a bunch of other systems, and maybe even get Parrot building and testing in those places too.

At least, that was the theory. I haven't really been able to get anything working here as easily as I did on my work computer. I even tried to set up a virtual Ubuntu 9.10 installation so I could start doing some testing on it before I became brave enough to attempt the upgrade again, but wasn't able to get that working either. One of the big issues I'm running into is that my system doesn't support hardware-level virtualization, which is necessary for virtualizing 64-bit guests. This is a total bummer, and I've become very unhappy with my computer since learning about that drawback. It's only about a year old though, so it's hard to justify buying a replacement. Maybe I can look into it for next Christmas.

I'm going to start compiling the things I've learned about VirtualBox and maybe write a blog post or two about using it to setup virtual test environments for Parrot. Could be very helpful for expanding our coverage on less-popular systems. I've already managed to post a smoke report for a platform that I cannot find any record of Parrot having a report for (OpenSolaris on i386), so that's a nice little bonus. There were a few failures in that report too, so maybe I can learn enough about the system to get those fixed. And everybody wins.

Tuesday, December 1, 2009

GC Gets Kick Start

Parrot's garbage collector is starting to get the lion's share of developer attention recently, especially after some very interesting benchmark statistics from chromatic went public. For the benchmark of building the new NQP-RX project, a whopping 80% of execution time is spent in the GC mark phase. Actually, the real statistic is that 80% of the execution time was spent in the Capture PMC's mark routine alone (and the functions that it calls). That's a huge amount of time, even for a naive GC like ours.

Let's do a quick recap of GCs and what makes them so expensive. GC is used to find and automatically reclaim unused data items so that their storage space can be reused. A good GC system means that the system programmer does not need to manually free memory when it is done being used: the GC will detect and automatically deallocate the unused memory. A good GC system, in essence, will completely eliminate memory leaks and help make the codebase much cleaner and more succinct. To do this, the GC needs to first find all unused ("dead") objects and then free them, two phases known as mark and sweep.

In a naive implementation of a mark and sweep algorithm, there are two aptly named phases: mark and sweep. The mark phase is charged with finding dead objects to reclaim. We typically do this in a reverse order, by first finding all objects that are in current use ("alive"), and then declaring all other objects to be reclaimable (or already free). Starting from a root set, such as the register sets and interpreter globals in Parrot, we can construct a graph of all objects by following pointers and marking each reached object as alive. It stands to reason that if there is no pointer to a particular object, it cannot be accessed, and anything that we do not access during the mark is presumed unreachable, which is the same as dead.

In the sweep phase, we must iterate over the pool of all objects, finding objects marked dead and freeing them. Freeing an object typically involves calling a custom destructor if one is provided, and making the memory available to the allocator so the memory can be reused the next time an allocation is made.

In every collection run for a naive mark and sweep collector, we must first trace the entire memory graph and then iterate over the entire object pool. This is very expensive, and the expense grows as the program's memory use grows. What we need for Parrot is something a little bit less naive.
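For illustration only, the naive algorithm described above can be condensed into a few lines of Python. This is a toy object graph, not Parrot's actual pools or PMC layout:

```python
# Compact sketch of naive mark and sweep: trace the whole graph from the
# roots, then walk the whole pool freeing everything left unmarked.

class Obj:
    def __init__(self, name):
        self.name, self.refs, self.marked = name, [], False

def collect(roots, pool):
    # Mark phase: everything reachable from the root set is alive.
    stack = list(roots)
    while stack:
        obj = stack.pop()
        if not obj.marked:
            obj.marked = True
            stack.extend(obj.refs)
    # Sweep phase: iterate the entire pool, keep the live, drop the dead.
    live = [o for o in pool if o.marked]
    for o in live:
        o.marked = False          # reset marks for the next collection
    return live

a, b, c = Obj("a"), Obj("b"), Obj("c")
a.refs.append(b)                  # a -> b; c is unreachable garbage
survivors = collect(roots=[a], pool=[a, b, c])
assert [o.name for o in survivors] == ["a", "b"]
```

Both loops touch every reachable object and every pooled object respectively, which is exactly the cost that grows with the program's memory use.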

What we probably can not do is make huge conceptual improvements to the idea of mark and sweep: We will always need to detect alive objects, and we will always need to traverse and free the dead ones. The general idea is sound and that's not something we want to change. What we can do, however, is to impose heuristics on the system to decrease the number of items to mark and decrease the number of objects to sweep. This is where the bulk of GC performance improvements can be made, by being much smarter about how the GC is used.

Allison sent a nice email to the list the other day essentially saying that GC has become an officially-recognized pain point and that we as a team are going to be looking at improvements after 2.0 (if we don't manage to start before that). Very interesting discussion has already started on ways to improve it.

As I mentioned above, the bulk of GC performance improvements are made by applying heuristics to decrease the number of objects to mark and sweep. A secondary set of improvements can then be made, often at the code level, to make the GC's operations run faster. I'll call the first set of improvements "algorithmic", and the second set "implementation". So what we the Parrot developers need to do first is pick the right algorithms to use and then implement and optimize them.

Here is a general list of things we can do to improve GC performance in Parrot:
  1. Allocate fewer GCable objects. This is typically the result of user-level code optimization. So, Parrot needs optimizers that are GC-sympathetic. Parrot also allocates a number of STRINGs and PMCs for internal purposes, so we need to minimize that.
  2. Mark fewer objects. This comes from a good generational GC system where we segregate items based on how stable they are.
  3. Sweep fewer objects. I think chromatic's linked-list idea will help us significantly in this regard.

What I think we are leaning towards in Parrot is a system called a generational GC. A generational system uses the heuristic that items which have lived for a long time without being GC collected will tend to stay alive longer, and items which are recently allocated tend to die quickly. It's an acknowledgement that a lot of garbage is created for very short-term uses, and relatively few things stand the test of time. Here's a quick example using explicitly non-idiomatic Perl 5:

my @array = fill_array(100); # 100 items in the array
foreach my $item (@array) {
    my $new_item = mangle($item);
    say $new_item;
}
In this loop we create a lot of garbage. Every new instance of $new_item is a new collectible item which can be declared dead at the bottom of the loop and allocated anew at the top. All the local variables used inside the mangle function follow the same life cycle. The only items that survive through the entire snippet are @array and its contents.

Every time we mark, we have to mark @array and all its contents, even though they are long-lived and will survive the entire loop. Every time we sweep, we need to separate the dead items ($new_item and all the local variables created inside mangle()) from @array and its persistently live set.

Generational GC works by recognizing that @array is long-lived and putting it into an older generation. Older generations contain objects which are, by definition, older and therefore less likely to die. If we aren't worried about an item dying, then we don't need to mark it explicitly; at least, we don't need to mark it as often. We also don't need to sweep it, if we can find a good, fast way to avoid that.
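The promotion heuristic can be shown with a deliberately tiny Python model. Reachability is faked here with a simple membership test, and nothing below reflects Parrot's real design; it just shows why @array stops being re-marked once promoted:

```python
# Toy two-generation collector: objects that survive a minor collection
# are promoted to the old generation, which later minor collections skip.

def minor_collect(young, old, roots):
    """Collect only the young generation; survivors are promoted to old."""
    survivors = [obj for obj in young if obj in roots]  # fake liveness test
    old.extend(survivors)          # promote the long-lived objects
    return [], old                 # young generation is now empty

young, old = ["array", "tmp1", "tmp2"], []
roots = {"array"}                  # only @array is still reachable

young, old = minor_collect(young, old, roots)
assert old == ["array"]            # promoted: skipped by later minor runs
assert young == []                 # short-lived garbage was reclaimed
```

A real generational collector also has to handle old-to-young pointers (usually with write barriers), which is where much of the implementation complexity lives.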