
News

Posted over 11 years ago
So it has come to this. Threads.pm is up and running, bringing the ever-wanted threaded execution to the most popular Perl 6 implementation. You’re looking for a TL;DR, aren’t you? Here’s what it’s capable of:

    use Threads;
    use Semaphore;

    my @elements;
    my $free-slots = Semaphore.new(value => 10);
    my $full-slots = Semaphore.new(value => 0);

    sub produce($id) {
        my $i = 0;
        loop {
            $free-slots.wait;
            @elements.push: $i;
            $i++;
            $full-slots.post;
        }
    }

    sub consume($id) {
        loop {
            $full-slots.wait;
            my $a = @elements.shift;
            $free-slots.post;
        }
    }

    for 1..5 -> $i {
        async sub { produce($i) }
    }
    for 5..10 -> $i {
        async sub { consume($i) }
    }

Doesn’t look that awesome. I mean, it’s just a producer-consumer problem, what’s the big deal? Let me repeat: OMG, RAKUDO HAS WORKING THREADS.

So, once we’re done celebrating and dancing the macarena all around, there’ll always be someone to ask “hold on, there’s got to be a caveat. Something surely is missing, explain yourself!” I’d be delighted to say “nope, everything’s there!”, but that would make me a liar. Yes, there are missing pieces.

First, those aren’t really native threads, just green threads. Native OS threads are already implemented in Parrot VM, but NQP (the language that Rakudo is based on) still doesn’t support them, so until some volunteer comes along to fix that, you’ll still have to build parrot --without-threads (which means: use green threads, not OS threads) for Threads.pm to work. But fear not! The API is exactly the same, so once native threads are there, both Threads.pm and the code you write with it should work without any changes.

But green threads are fine too! Except for one minor detail: whenever any of them blocks on IO, the entire Parrot comes to a halt. The plan is for the Parrot thread scheduler to handle this nicely, but it’s not there yet, so if you expected nice and easy async IO, sorry, but you’re stuck with MuEvent :)

Yep, we’re not really there yet. But I claim it’s closer than ever.
We have a working threads implementation. You can write code with it, and it’s not a PITA. Go for it! There’s a lot of room to improve it. I didn’t try very hard to make Threads.pm follow the concurrency synopsis (in my defense, I think niecza doesn’t follow it either :)), and I think that once we unleash a wolfpack of developers on it, we can work towards something that we’ll all love to use.
Posted over 11 years ago
Parrot threads were featured on the Perl advent calendar, day 11. Something cool about Perl 6 every day, in December. See http://perl6advent.wordpress.com/2012/12/11/day-11-parrot-threads/.
Posted over 11 years ago
On behalf of the Parrot team, I'm proud to announce Parrot 4.11.0, also known as "All together - Happy Birthday Lovebird". Parrot is a virtual machine aimed at running all dynamic languages.
Posted over 11 years ago
Debugging Parrot strings was featured on the Perl 6 advent calendar, day 7. Something cool about Perl 6 every day, in December. See http://perl6advent.wordpress.com/2012/12/07/day-7-mimebase64-on-encoded-strings/. It shows how to debug crazy string encoding problems when you are not sure whether the core implementation, the library, the spec or the tests are wrong. It turned out that the library and the tests were wrong.
Posted over 11 years ago
On behalf of the Parrot team, I'm proud to announce Parrot 4.10.0, also known as "Red-eared Parakeet". Parrot is a virtual machine aimed at running all dynamic languages.
Posted over 11 years ago
I might not be too bright. Either that, or I might not have a great memory, or maybe I’m just a glutton for punishment. Remember the big IO system rewrite I completed only a few weeks ago? Remember how much of a huge hassle that turned into and how burnt out I got because of it? Apparently I don’t, because I’m back at it again.

Parrot hacker brrt came to me with a problem: after the io_cleanup merge he noticed that his mod_parrot project doesn’t build and pass tests anymore. This was sort of expected; he was relying on lots of specialized IO functionality, and I broke a lot of specialized IO functionality. Mea culpa. I had a few potential fixes in mind, so I tossed around a few ideas with brrt, put together a few small branches, and think I’ve got the solution.

The problem, in a nutshell, is this: in mod_parrot, brrt was using a custom Winxed object as an IO handle. By hijacking the standard input and output handles he could convert requests on those handles into NCI calls to Apache, and all would just work as expected. However, with the IO system rewrite, IO API calls no longer redirect to method calls. Instead, they are dispatched to new IO VTABLE function calls which handle the logic for individual types.

First question: How do we recreate brrt’s custom functionality, by allowing custom bytecode-level methods to implement core IO functionality for custom user types?

My answer: We add a new IO VTABLE for “User” objects, which can redirect low-level requests to PMC method calls.

Second question: Okay, so how do we associate this new User IO VTABLE with custom objects? Currently the get_pointer_keyed_int VTABLE is used to get access to the handle’s IO_VTABLE* structure, but bytecode-level objects cannot use get_pointer_keyed_int.

My answer: For most IO-related PMC types, the kind of IO_VTABLE* to use is statically associated with that type. Socket PMCs always use the Socket IO VTABLE, StringHandle PMCs always use the StringHandle IO VTABLE, etc.
So, we can use a simple map to associate PMC types with specific IO VTABLEs. Any PMC type not in this map can default to the User IO VTABLE, making everything “just work”.

Third question: Hold your horses, what do you mean “most” IO-related PMC types have a static IO VTABLE? Which ones don’t, and how do we fix it?

My answer: The big problem is the FileHandle PMC. Due to some legacy issues the FileHandle PMC has two modes of operation: normal file IO and pipe IO. I guess these two ideas were conflated long ago because internally the details are kind of similar: both files and pipes use file descriptors at the OS level, and many of the library calls to use them are the same, so it makes sense not to duplicate a lot of code. However, some nonsensical issues arise because pipes and files are not the same: files don’t have a notion of a “process ID” or an “exit status”, while pipes don’t have a notion of a “file position” and cannot do methods like seek or tell. Parrot uses the "p" mode specifier to tell a FileHandle to be in pipe mode, which causes the IO system to select between either the File or the Pipe IO VTABLE for each call. Instead of this terrible system, I suggest we separate this logic into two PMC types: FileHandle (which, as its name suggests, operates on files) and Pipe. By breaking up this one type into two, we can statically map individual IO VTABLEs to individual PMC types, and the system just works.

Fourth question: Once we have these maps in place, how do we do IO with user-defined objects?

My answer: The User IO VTABLE will redirect low-level IO requests into method calls on these PMCs. I’ll break IO_BUFFER* pointers out into a new PMC type of their own (IOBuffer), and users will be able to access and manipulate these things from any level. We’ll attach buffers to arbitrary PMCs using named properties, which means we can attach buffers to any PMC that needs them.

So that’s my chain of thought on how to solve this problem.
I’ve put together three branches to start working on this issue, but I don’t want to get too involved in this code until I get some buy-in from other developers. The FileHandle/Pipe change is going to break some existing code, so I want to make sure we’re cool with this idea before we make breaking changes and need to patch things like NQP and Rakudo. Here are the three branches I’ve started:

whiteknight/pipe_pmc: This branch creates the new Pipe PMC type, separate from FileHandle. This is the breaking change that we need to make up front.

whiteknight/io_vtable_lookup: This branch adds the new IOBuffer PMC type, implements the new IO VTABLE map, and implements the new properties-based logic for attaching buffers to PMCs.

whiteknight/io_userhandle: This branch implements the new User IO VTABLE, which redirects IO requests to methods on PMC objects.

Like I said, these are all very rough drafts so far. All three branches build, but they don’t necessarily pass all tests or look very pretty. If people like what I’m doing and agree it’s a good direction to go in, I’ll continue work in earnest and see where it takes us.
Posted over 11 years ago
"All proofs inevitably lead to propositions which have no proof! All things are known because we want to believe in them." -- The Lady Jessica, to Bene Gesserit delegation

On behalf of the Parrot team, I'm proud to announce Parrot 4.9.0, also known as "Proto-Hydra". Parrot is a virtual machine aimed at running all dynamic languages.
Posted over 11 years ago
On behalf of the Parrot team, I'm proud to announce Parrot 4.8.0, also known as "Spix's Macaw". Parrot is a virtual machine aimed at running all dynamic languages.
Posted over 11 years ago
First, some personal status.

Personal Status

I haven’t blogged in a little while, and there are a few reasons for that. I’ll list them quickly:

Work has been… tedious lately, and when I come home I find that I want to spend much less time looking at a computer, especially any computer that brings more stress into my life.

My computer at home generates a huge amount of stress. In addition to several physical problems with it, and the fact that I effectively do not have a working mouse (the built-in trackpad is extremely faulty, and the external USB mouse I had been using is now broken; the computer won’t even boot if it’s plugged into the port), I’ve been having software problems with lightdm and xserver crashing and needing to be restarted much more frequently than I think should be necessary. We are planning to buy me a new one, but the budget won’t allow that until closer to xmas.

The io_cleanup1 work took much longer than I had anticipated. I wrote a lot more posts about that branch than I ever published, and the ones I did publish were extremely repetitive (“It’s almost finished, any day now!”).

Posting less means I got out of the habit of posting, and getting back into that habit requires some effort. I’m going to do what I can to post something of a general Parrot update here, and hopefully I can get back to posting a little more regularly.

io_cleanup1 Status

io_cleanup1 did indeed merge, with almost no problems reported at all. I’m very happy about that work, and am looking forward to pushing the IO subsystem to the next level. Before I started io_cleanup1, I had some plans in mind for new features and capabilities I wanted to add to the VM. However, I quickly realized that the house had some structural problems to deal with before I could slap a new coat of paint on the walls. The structure is, I now believe, much better. I’ve still got that paint in the closet, and eventually I’m going to throw it on the walls.
The io_cleanup branch did take a lot of time and energy, much more than I initially expected. But it’s over now and I’m happy with the results, so I can start looking on to the next project on my list.

Threads Status

Threads is very, very close to being mergeable. I’ve said that before and I’m sure I’ll have occasion to say it again. However, there’s one remaining problem, pointed out by tadzik, and if my diagnosis is correct, it’s a doozy.

The basic threads system, which I outlined in a series of blog posts ages ago, goes like this: We cut out the need for (most) locks, and therefore many possibilities of deadlock, by making objects writable only from the thread that owns them. Other threads can have nearly unfettered read access, but writes require sending a message to the owner thread to perform the update in a synchronized, orderly manner. By limiting cross-thread writes, we cut out many expensive mechanisms that would otherwise be needed for writing data, like Software Transactional Memory (STM) and locks (and, therefore, associated deadlocks). It’s a system inspired closely by things like Erlang and some functional languages, although I’m not sure there’s any real prior art for the specifics of it. Maybe that’s because other people know it won’t work right. The only thing we can do is see how it works.

The way nine implemented this system is to set up a Proxy type which intercepts and dispatches read/write requests as appropriate. When we pass a PMC from one thread to another, we instead create and pass a Proxy to it. Every read on the proxy redirects immediately to a read on the original target PMC. Every write causes a task to dispatch to the owner thread of the target PMC with the update logic.
Here’s some example code, adapted from the example tadzik had, which fails on the threads branch:

    function main[main](var args) {
        var x = 1;
        var t = new 'Task'(function() {
            x++;
            say(x);
        });
        ${ schedule t };
        ${ wait t };
        say("Done!");
    }

Running this code on the threads branch produces anything from an assertion failure to a segfault. Why?

This example creates a closure and schedules that closure as a task. The task scheduler assigns that task to the next open thread in the pool. Since it’s dispatching the Task on a new thread, all the data is proxied. Instead of passing a reference to the Integer PMC x, we’re passing a Proxy PMC which points to x. This part works as expected. When we invoke a closure, we update the context to point to the “outer” context, so that lexical variables (“x”, in this case) can be looked up correctly. However, instead of having an outer which is a CallContext PMC, we have a Proxy to a CallContext.

An overarching problem with CallContexts is that they get used, a lot. Every single register access goes through the CallContext, and almost all opcodes access at least one register. Lexical information is looked up through the CallContext. Backtrace information is looked up in the CallContext. A few other things are looked up there as well. In short, CallContexts are accessed quite a lot. Because they are accessed so much, CallContexts are NOT dealt with through the normal VTABLE mechanism. Adding an indirect function call for every single register access would be a huge performance burden. So, instead of doing that, we poke into the data directly and use raw data pointers to get (and to cache) the things we need.

And there’s the rub. For performance we need to be able to poke into a CallContext directly, but for threads we need to pass a Proxy instead of a CallContext. And the pointers for a Proxy are not the same as the pointers for a CallContext. See the problem?
I identified this issue earlier in the week and have been thinking it over for a few days. I’m not sure I’ve found a workable solution yet. At least, I haven’t found a solution that wouldn’t impose some limitations on semantics. For instance, in the code example above, the implicit expectation is that the x variable lives on the main thread but is updated on the second thread, and those updates should be reflected back on main after the wait opcode.

The solution I think I have is to create a new dummy CallContext that would pass requests off to the proxied LexPad. I’m not sure about some of the individual details, but overall I think this solution should solve our biggest problem. I’ll probably play with it this weekend and see if I can finally get this branch ready to merge.

Other Status

rurban has been doing some great cleanup work with native PBC, something that he’s been working on (and fighting to work on) for a long time. I’d really love to see more work done in this area in the future, because there are so many more opportunities for compatibility and interoperability at the bytecode level that we aren’t exploiting yet.

Things have otherwise been a little slow lately, but between io_cleanup1, threads and rurban’s PBC work, we’re still making some pretty decent progress in some pretty important areas. If we can get threads fixed and merged soon, I’ll be on to the next project on the list.
Posted over 11 years ago
At YAPC::NA 2012 in Madison, WI, I gave a lightning talk about basic improvements in Rakudo’s performance over the past couple of years. Earlier today the video of the lightning talks session appeared on YouTube; I’ve clipped out my talk from the session into a separate video below. Enjoy!