News
Posted over 4 years ago by [email protected] (ClassicHasClass)
UPDATE: Additional pictures are up at Talospace.

Vintage Computer Festival West 2019 has come and gone, and I'll be posting many of the pictures on Talospace hopefully tonight or tomorrow. However, since this blog's audience is both Mozilla-related (as syndicated on Planet Mozilla) and PowerPC-related, I've chosen to talk a little bit about old browsers for old machines (since, if you use TenFourFox, you're using a relatively recent browser on an old machine), as that was part of my exhibit this year, as well as some of the Apple-related exhibits that were present.

This exhibit I christened "RISCy Business," a collection of various classic RISC-based portables and laptops. The machines I had running for festival attendees were a Tadpole-RDI UltraBook IIi (UltraSPARC IIi) running Solaris 10, an IBM ThinkPad 860 (166MHz PowerPC 603e, essentially a PowerBook 1400 in a better chassis) running AIX 4.1, an SAIC Galaxy 1100 (HP PA-7100LC) running NeXTSTEP 3.3, and an RDI PrecisionBook C160L (HP PA-7300LC) running HP/UX 11.00. I also brought my Sun Ultra-3 (Tadpole Viper with a 1.2GHz UltraSPARC IIIi), though because of its prodigious heat issues I didn't run it at the show. None of these machines retailed for less than ten grand, if they were sold commercially at all (the Galaxy wasn't). Here they are, for posterity:

The UltraBook played a Solaris port of Quake II (software-rendered) and Firefox 2, the ThinkPad ran AIX's Ultimedia Video Monitor application (using the machine's built-in video capture hardware and an off-the-shelf composite NTSC camera) and Netscape Navigator 4.7, the Galaxy ran the standard NeXTSTEP suite along with some essential apps like OmniWeb 2.7b3 and Doom, and the PrecisionBook ran the HP/UX ports of the Frodo Commodore 64 emulator and Microsoft Internet Explorer 5.0 SP1. (Yes, IE for Unix used to be a thing.)

Now, of course, period-correct computers demand a period-correct website viewable on the browsers of the day, which is the site being displayed on screen and served to the machines from a "back office" Raspberry Pi 3. However, devising a late 1990s site means a certain, shall we say, specific aesthetic and careful analysis of vital browser capabilities for maximum impact. In these enlightened times no one seems to remember any of this stuff and what HTML 4.01 features worked where, so here is a handy table for your next old workstation browser demonstration (using a <table>, of course):

                                         frames   animated GIF   <marquee>   <blink>
    Mozilla Suite 1.7                      ✓           ✓             ✓          ✓
    Firefox 2                              ✓           ✓             ✓          ✓
    Netscape Navigator 4.7                 ✓           ✓             ✓          ✓
    Internet Explorer for UNIX 5.0 SP1     ✓           ✓             ✓          ✗
    Firefox 52                             ✓           ✓             ✓          ✗
    OmniWeb 2.7b3                          ✓           ✓             ✗          ✗

Basically I ended up looting oocities and my old files for every obnoxious animated GIF and background I could find. This yielded a website that was surely authentic for the era these machines inhabited, and demonstrated exceptionally good taste. By popular request, the website the machines are displaying is now live on Floodgap (after a couple minor editorial changes). I think the exhibit was pretty well received:

Probably the star of the show, and more or less on topic for this blog, was the huge group of Apple I machines (many, if not most, still in working order). They were under Plexiglas, and given that there was seven-figures'-worth of fruity artifacts all in one place, a security guard impassively watched the gawkers.
The Apple I owners' club is there to remind you that you, of course, don't own an Apple I.

A working Xerox 8010, better known as the Xerox Star and one of the innovators of the modern GUI paradigm (plus things like, you know, Ethernet), was on display along with an emulator. Steve Jobs saw one at PARC and we all know how that ended.

One of the systems there, part of the multi-platform Quake deathmatch network exhibit, was a Sun Ultra workstation running an honest-to-goodness installation of the Macintosh Application Environment emulation layer. Just for yuks, it was simultaneously running Windows on its SunPCI x86 side-card as well:

The Quake exhibitors also had a Daystar Millennium in a lovely jet-black case, essentially a Daystar Genesis MP+. These were some of the few multiprocessor Power Macs (and clones at that) before Apple's own dual G4 systems emerged. This system ran four 200MHz PowerPC 604e CPUs, though of course only application software designed for multiprocessing could take advantage of them.

A pair of Pippins, Apple's infamous attempt to turn the Power Mac into a home console platform and fresh off being cracked, were present at the exhibit next to the Quake guys':

A couple of Apple Newtons (an eMate and several Message Pads) also showed up so you could find out if the handwriting recognition was as bad as they said it was. There were also a couple Apple II systems hanging around (part of a larger exhibit on 6502-based home computers, hence the Atari 130XE next to it).

I'll be putting up the rest of the photos on Talospace, including a couple other notable historical artifacts and the IBM 604e systems the Quake exhibit had brought along, but as always it was a great time and my exhibit was not judged to be a fire hazard. You should go next year.

The moral of this story is the next time you need to make a 1990s web page that you can actually view on a 1990s browser, not that phony CSS and JavaScript crap facsimile they made up for Captain Marvel, now you know what will actually show a blinking scrolling marquee in a frame when you ask for one. Maybe I should stick an -powered guestbook in there too. (For some additional pictures, see our entry at Talospace.)
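For the record, here is a minimal sketch of the kind of markup the table above is scoring: a frameset page whose banner frame shows a blinking, scrolling marquee next to an animated GIF. The filenames and the construction GIF are hypothetical; the tags are the ones the post calls out.

    <!-- index.html: an HTML 4.01 Frameset document. -->
    <html>
      <frameset rows="80,*">
        <frame src="banner.html" name="banner">
        <frame src="main.html" name="main">
        <noframes>
          <body>This page requires a browser that supports frames.</body>
        </noframes>
      </frameset>
    </html>

    <!-- banner.html: the blinking, scrolling banner content. -->
    <html>
      <body bgcolor="#000000" text="#00ff00">
        <marquee><blink>Welcome to my period-correct home page!</blink></marquee>
        <img src="under-construction.gif" alt="Under construction">
      </body>
    </html>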
Posted over 4 years ago by Siyu Chen
This summer I was very lucky to join Hubs by Mozilla as a technical artist intern. Over the 12 weeks that I was at Mozilla, I worked on two different projects. My first project was about particle systems, something I have always had great interest in. I was developing the particle system feature for Spoke, the 3D editor with which you can easily create a 3D scene and publish it to Hubs.

Particle systems are a technique used in a wide range of game physics, motion graphics and computer graphics related fields. They are usually composed of a large number of small sprites or other objects to simulate some chaotic system or natural phenomena. Particles can make a huge impact on the visual result of an application, and in virtual and augmented reality they can deepen the immersive feeling greatly. Particle systems can be incredibly complex, so for this version we wanted to keep the particle system free of the heavy behaviour controls found in some native game engines, keeping only the basic attributes that are needed.

The Spoke particle system can be separated into two parts: the particles and the emitter. Each particle has a texture/sprite, lifetime, age, size, color, and velocity as its basic attributes. The emitter is simpler, as it only has properties for its width and height and information about the particle count (how many particles it can emit per life cycle). By changing the particle count and the emitter size, users can easily customize a particle system for different uses, like creating falling snow in a wintry scene or adding a small water splash to a fountain.

[Changing the emitter size]
[Changing the number of particles from 100 to 200]

You can also change the opacities and the colors of the particles. The actual color and opacity values are interpolated between start, middle and end colors/opacities. And for the main visuals, you can change the sprites to the image you want by using a URL to an image, or choosing from your local assets.

What does a particle's life cycle look like? Let's take a look at this chart: every particle is born with a random negative initial age, which can be adjusted through the Age Randomness property. After it's born, its age will keep growing as time goes by. When its age is bigger than the total lifetime (formed by Lifetime and Lifetime Randomness), the particle will die immediately, be re-assigned a negative initial age, and then start over again. The Lifetime here is not the actual lifetime that every particle will live; in order to not have all particles disappear at the same time, the Lifetime Randomness attribute varies the actual lifetime of each particle. The higher the Lifetime Randomness, the larger the differentiation will be among the actual lifetimes of the whole particle system. There is another attribute called Age Randomness, which is similar to Lifetime Randomness. The difference is that Age Randomness is used to vary the negative initial ages to have a variation on the birth of the particles, while Lifetime Randomness creates variation at the end of their lives.

Every particle also has velocity properties across the x, y and z axes. By adjusting the velocity in three dimensions, users have better control over the particles' behaviours, for example to simulate simple phenomena like gravity or wind. With angular velocity, you can also control the rotation of the particle system to have a more natural and dynamic result.
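To make that life cycle concrete, here is a minimal sketch in plain JavaScript of the update rule described above, plus the start/middle/end interpolation mentioned in the next paragraph. The names and values are illustrative only; this is not Spoke's actual implementation.

    // Illustrative constants standing in for the Spoke properties described
    // above; the values are made up for the example.
    const LIFETIME = 2.0;            // "Lifetime"
    const LIFETIME_RANDOMNESS = 0.5; // "Lifetime Randomness"
    const AGE_RANDOMNESS = 1.0;      // "Age Randomness"

    // (Re)spawn a particle: a random negative initial age delays its birth,
    // and a randomized lifetime keeps all particles from dying at once.
    function spawn(particle) {
      particle.age = -Math.random() * AGE_RANDOMNESS;
      particle.lifetime = LIFETIME + Math.random() * LIFETIME_RANDOMNESS;
    }

    // Called every frame: the age grows with time, and once it exceeds this
    // particle's own lifetime, the particle dies and immediately starts over.
    function update(particle, dt) {
      particle.age += dt;
      if (particle.age > particle.lifetime) {
        spawn(particle);
      }
    }

    // Piecewise-linear interpolation between start, middle and end values
    // (as used for color, opacity and size), with t = age / lifetime in [0, 1].
    function lerp3(start, middle, end, t) {
      return t < 0.5
        ? start + (middle - start) * (t / 0.5)
        : middle + (end - middle) * ((t - 0.5) / 0.5);
    }

    const particle = { age: 0, lifetime: 0 };
    spawn(particle);
    update(particle, 1 / 60); // advance by one 60fps frame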
The velocity, color and size properties all have the option to use different interpolation functions between their start, middle and end stages. The particle system is officially out on Spoke, so go try it out and let us know what you think!

Avatar Display Emojis

My other project was the avatar emoji display screen in Hubs. I did the design of the emoji images, the UI/UX design, and the actual implementation of this feature. It's actually a straightforward project: I needed to figure out the style of the emoji display on the chest screen, do some graphic design at the interface level, make decisions on the interaction flow, and implement it in Hubs.

[Evolution of the display emoji design. We ultimately decided to have the smooth edge emoji with some bloom effect.]
[Final version of the display emoji design]
[Icon design for the menu user interface]
[Interaction design using Hubs display styles]

Demo: when you enter pause mode on Hubs, the emoji box will show up, replacing the chat box, and you can change your avatar's screen to one of the emojis offered.

I want to say thank you to Hubs for having me this summer. I learned a lot from all the talented people in Hubs, especially Robert, Jim, Brian and Greg, who helped me a lot to overcome the difficulties I came across. The encouragement and support from the team is the best thing I got this summer. Miss you guys already!
Posted over 4 years ago by chuttenc
Back in January I was privileged to speak at StarCon 2019 at the University of Waterloo about responsible data collection. It was a bitterly-cold weekend with beautiful sun dogs ringing the morning sun. I spent it inside talking about good ways to collect data and how Mozilla serves as a concrete example. The talk is a short 15 minutes and aimed at a general audience. I hope you like it.

I encourage you to also sample some of the other talks. Two I remember fondly are Aaron Levin's "Conjure ye File System, transmorgifier" about video games that look like file systems and Cory Dominguez's lovely analysis of Moby Dick editions in "or, the whale". Since I missed a whole day, I now get to look forward to fondly discovering new ones from the full list.

:chutten
Posted over 4 years ago by mhoye
The Public Library of Science's Ten Simple Rules series can be fun reading; they're introductory papers intended to provide novices or non-domain-experts with a set of quick, evidence-based guidelines for dealing with common problems in and around various fields, and it's become a pretty popular, accessible format as far as scientific publication goes. Topic-wise, they're all over the place: protecting research integrity, creating a data-management plan and taking advantage of Github are right there next to developing good reading habits, organizing an unconference or drawing a scientific comic, and lots of them are kind of great.

I recently had the good fortune to be co-author on one of them that's right in my wheelhouse and has recently been accepted for publication: Ten Simple Rules for Helping Newcomers Become Contributors to Open Projects. They are, as promised, simple:

1. Be welcoming.
2. Help potential contributors evaluate if the project is a good fit.
3. Make governance explicit.
4. Keep knowledge up to date and findable.
5. Have and enforce a code of conduct.
6. Develop forms of legitimate peripheral participation.
7. Make it easy for newcomers to get started.
8. Use opportunities for in-person interaction, with care.
9. Acknowledge all contributions, and
10. Follow up on both success and failure.

You should read the whole thing, of course; what we're proposing are evidence-based practices, and the details matter, but the citations are all there. It's been a privilege to have been a small part of it, and to have done the work that's put me in the position to contribute.
Posted over 4 years ago by pmcclard
Hello SUMO community, I have a couple announcements for today. I'd like you all to welcome our two new community managers.

First off, Kiki has officially joined the SUMO team as a community manager. Kiki has been filling in with Konstantina and Ruben on our social support activities. We had an opportunity to bring her onto the SUMO team full time starting last week. She will be transitioning out of her responsibilities at the Community Development Team and will continue her work on the social program as well as managing SUMO days going forward.

In addition, we have hired a new SUMO community manager to join the team. Please welcome Giulia Guizzardi to the SUMO team. You can find her on the forums as gguizzardi. Below is a short introduction:

Hey everyone, my name is Giulia Guizzardi, and I will be working as a Support Community Manager for Mozilla. I am currently based in Berlin, but I was born and raised in the north-east of Italy. I studied Digital Communication in Italy and Finland, and worked for half a year in Poland. My greatest passion is music; I love participating in festivals and concerts along with collecting records and listening to new releases all day long. Other than that, I am often online, playing video games (Firewatch at the moment) or scrolling Youtube/Reddit. I am really excited for this opportunity and happy to work alongside the community!

Now that we have two new community managers, we will work with Konstantina and Ruben to transition their work to Kiki and Giulia. We're also kicking off work to create a community strategy, for which we will be seeking feedback soon. In the meantime, please help me welcome Kiki and Giulia to the team.
Posted over 4 years ago by Henrik Skupin
Note: This article is based on Firefox builds as available for download at least until August 7th, 2019. In case you want to go through those steps on your own, I cannot guarantee that it will lead to the same effects if newer builds are used.

So a couple of months ago, when I was looking for some new interesting and challenging sport events to participate in to reach my own limits, I was made aware of the Mega Hike event. It sounded like fun, and it was also good to see that one particular event has been organized annually in my own city since 2018. As such I signed up together with a friend, and we had an amazing day. But hey… that's not what I actually want to talk about in this post!

The thing I was actually more interested in while reading content on this web site was the high CPU load of Firefox while the page was open in my browser. Once the tab got closed the CPU load dropped back to normal numbers, and it went up again once I reopened the tab. Given that I didn't have that much time to further investigate this behavior, I simply logged bug 1530071 to make people aware of the problem. Sadly the bug got lost in my incoming queue of daily bug mail, and I missed responding, which meant no further progress was made.

Yesterday I stumbled over the website again, and by chance was made aware of the problem again. Nothing seemed to have changed, and Firefox Nightly (70.0a1) was still using around 70% of CPU even with the tab's content not visible, i.e. moved to a background tab. Given that this is a serious performance and power related issue, I thought that an investigation might be pretty helpful for developers. In the following sections I want to lay out the steps I took to nail down this problem.

Energy consumption of Firefox processes

While at first glance the Activity Monitor of macOS is helpful to get an impression of the memory usage and CPU load of Firefox, it's a bit hard to see how much each and every open tab is actually using. You could try to match the listed process ids with a specific tab in the browser by hovering over the appropriate tab title, but the displayed tooltip only contains the process id in Firefox Nightly builds, not in beta or final releases. Further, multiple tabs will currently share the same process, and as such the displayed value in the Activity Monitor is a shared one.

To further drill down the CPU load to a specific tab, Firefox has the about:performance page, which can be opened by typing that address into the location bar. It's basically an internal task manager to inspect the energy impact and memory consumption of each tab. Even more helpful is the option to expand the view for sub frames, which are usually used to embed external content. In case of the Megamarsch page there are three of those, and one actually spikes out, consuming nearly all the energy used by the tab. As such there is a good chance that this particular iframe from YouTube, which is embedding a video, is the problem.

To verify that, the integrated Firefox Developer Tools can be used. Especially the Page Inspector will help us here, as it allows searching for specific nodes, CSS classes, or other things, and then interacting with them. To open it, check the Tools > Web Developer sub menu from inside the main menu. Given that the URI of the iframe is known, let's search for it in the Inspector: when running the search it will not be the first result found, so continue until the expected iframe is highlighted in the Inspector pane.
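As an aside, a node found this way can also be removed from the Web Console instead of the Inspector's context menu. The selector below is hypothetical; adjust it to whatever embed your search turned up:

    // Remove the first YouTube iframe on the page, then watch the CPU load.
    document.querySelector('iframe[src*="youtube"]').remove();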
Now that we have found the embedded content, let's delete the node by opening the context menu and clicking Delete Node. If it was the problem, the CPU load should drop back to normal. Sadly, as you will notice when doing it yourself, that's not the case, which means something else on that page is causing it.

The easiest way to figure out which node really causes the spike is to simply delete more nodes on that page. Start at a higher level and delete the header, footer, or any sidebars first. While doing that, always keep an eye on the Activity Monitor and check if the CPU load has dropped. Once that is the case, undo the last step so that the causing node gets inserted again. Then remove all sibling nodes, so that only the causing node remains. Now drill down even further until no more child nodes remain. As an aside, don't forget to change the update frequency so that values are updated each second, and revert it once you are done. In our case the following node, which is related to the cart icon, remains:

So some kind of loading indicator seems to trigger Firefox to repaint a specific area of the screen. To verify that, remove the extra CSS class definitions. Once the icon-web-loading-spinner class has been removed, it's fine. Note that when hovering over the node while the class is still set, you can even see a spinning rectangle as a placeholder for the real element.

Checking the remaining stylesheets which get included, the one which remains (after removing all others without a notable effect) is from assets.jimstatic.com. And for the particular CSS class it holds the following animation:

    @keyframes spinit {
      0% { -webkit-transform: rotate(0deg);   transform: rotate(0deg); }
      to { -webkit-transform: rotate(360deg); transform: rotate(360deg); }
    }

More interesting is that this specific class defines opacity: 0, which basically means that the node shouldn't be visible at all, and no re-painting should happen until the node has been made visible.

With this information in hand, I updated the above-mentioned bug with all the newly found details and handed it over to the developers. Everyone who wants to follow the progress of fixing it can subscribe to the CC list and will be automatically notified by Bugzilla about updates.

If you found this post useful, please let me know, and I will write more posts like it in the future.
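For what it's worth, a site-side mitigation would be to not run the animation at all while the spinner is hidden. A minimal sketch, assuming the page can toggle a hypothetical is-loading class when the spinner should actually show (the animation timing is illustrative):

    /* Keep the hidden spinner completely inert instead of animating it at
       opacity: 0, so no animation work happens while nothing is visible. */
    .icon-web-loading-spinner {
      display: none;
    }

    /* Only run the spinit animation while the loading state is shown. */
    .is-loading .icon-web-loading-spinner {
      display: inline-block;
      animation: spinit 1s linear infinite;
    }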
Posted over 4 years ago by sheppy
One of my most dreaded tasks is that of estimating how long tasks will take to complete while doing sprint planning. I have never been good at this, and it has always felt like time stolen away from the pool of hours available to do what I can't help thinking of as "real work." While I'm quite a bit better at the time estimating process than I was a decade ago—and perhaps infinitely better at it than I was 20 years ago—I still find that I, like a lot of the creative and technical professionals I know, dread the process of poring over bug and task lists, project planning documents, and the like in order to estimate how long things will take to do.

This is a particularly frustrating process when dealing with tasks that may be nested, have multiple—often not easily detected ahead of time—dependencies, and may involve working with technologies that aren't actually as ready for prime time as expected. Add to that the fact that your days are filled with distractions, interruptions, and other tasks you need to deal with, and predicting how long a given project will take can start to feel like a guessing game.

The problem isn't just one of coming up with the estimates. There's a more fundamental problem of how to measure time. Do you estimate projects in terms of the number of work hours you'll invest in them? The number of days or weeks you'll spend on each task? Or some other method of measuring duration?

Hypothetical ideal days

On the MDN team, we have begun over the past year to use a time unit we call the hypothetical ideal day or simply ideal day. This is a theoretical time unit in which you are able to work, uninterrupted, on a project for an entire 8-hour work day. A given task may take any appropriate number of ideal days to complete, depending on its size and complexity. Some tasks may take less than a single ideal day, or may otherwise require a fractional number of ideal days (like 0.5 ideal days, or 1.25 ideal days). There are a couple of additional guidelines we try to follow: we generally round to a quarter of a day, and we almost always keep our user stories' estimates at five ideal days or less, with two or three being preferable. The larger a task is, the more likely it is that it's really a group of related tasks.

There obviously isn't actually any such thing as an ideal, uninterrupted day (hence the words "hypothetical" and "theoretical" in the previous paragraph). Even on one's best day, you have to stop to eat, to stretch, and to do any number of other things during a day of work. But that's the point of the ideal day unit: by building right into the unit the understanding that you're not explicitly accounting for these interruptions in the time value, you can reinforce the idea that schedules are fragile, and that every time a colleague or your manager (or anyone else) causes you to be distracted from your planned tasks, the schedule will slip. That means that each ideal day may actually last anywhere from one to three or more days in the real world, depending on what's going on. Every meeting, phone call, sidetracking onto an unscheduled task, bad night's sleep, high pollen day, or bad news day can impact the amount of real-life time required to complete one ideal day's worth of work.

Ideal days in sprint planning

The goal, then, during sprint planning is to do your best to leave room for those distractions when mapping ideal days to the actual calendar. Our sprints on the MDN team are 12 business days long. When selecting tasks to attempt to accomplish during a sprint, we start by having each team member count up how many of those 12 days they will be available for work. This involves subtracting from that 12-day sprint any PTO days, company or local holidays, substantial meetings, and so forth.
When calculating my available days, I like to subtract a rough number of partial days to account for any appointments that I know I'll have. We then typically subtract about 20% (or a day or two per sprint, although the actual amount varies from person to person based on how often they tend to get distracted and how quickly they rebound) to allow for distractions and sidetracking, and to cover typical administrative needs. The result is a rough estimate of the number of ideal days we're available to work during the sprint.

With that in hand, each member of the team can select a group of tasks that can probably be completed during the number of ideal days we estimate they'll have available during the sprint. But we know going in that these estimates are in terms of ideal days, not actual business days, and that if anything unanticipated happens, the mapping of ideal days to actual days we did won't match up anymore, causing the work to take longer than anticipated. This understanding is fundamental to how the system works; by going into each sprint knowing that our mapping of ideal days to actual days is subject to external influences beyond our control, we avoid many of the anxieties that come from having rigid or rigid-feeling schedules.

For your consideration

For example, let's consider a standard 12-business-day MDN sprint which spans my birthday as well as Martin Luther King, Jr. Day, which is a US Federal holiday. During those 12 days, I also have two doctor appointments scheduled which will have me out of the office for roughly half a day total, and I have about a day's worth of meetings on my schedule as of sprint planning time. Doing the math, then, we find that I have 8.5 days available to work. Knowing this, I then review the various task lists and find a total of around 8 to 8.5 days worth of work to do. Perhaps a little less if I think the odds are good that more time will be occupied with other things than the calendar suggests. For example, if my daughter is sick, there's a decent chance I will be too in a few days, so I might take on just a little less work for the sprint.

As the sprint begins, then, I have an estimated 8 ideal days worth of work to do during the 12-day sprint. Because of the "ideal day" system, everyone on the team knows that if there are any additional interruptions—even short ones—the odds of completing everything on the list are reduced. As such, this system not only helps make it easier to estimate how long tasks will take, but also helps to reinforce with colleagues that we need to stay focused as much as possible, in order to finish everything on time.

If I don't finish everything on the sprint plan by the end of the sprint, we will discuss it briefly during our end-of-sprint review to see if there's any adjustment we need to make in future planning sessions, but it's done with the understanding that life happens, and that sometimes delays just can't be anticipated or avoided. On the other hand, if I happen to finish before the sprint is over, I have time to get extra work done, so I go back to the task lists, or to my list of things I want to get done that are not on the priority list right now, and work on those things through the end of the sprint. That way, I'm able to continue to be productive regardless of how accurate my time estimates are.
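To make the arithmetic in that example concrete, here is a small sketch in plain JavaScript. The function and its parameter names are mine, not any MDN tooling, and it counts the birthday and the holiday from the example as the two full days off:

    // Estimate available ideal days for a sprint by subtracting days off,
    // appointments, and meetings, then applying a distraction buffer.
    function availableIdealDays(sprintLength, daysOff, appointmentDays,
                                meetingDays, distractionBuffer) {
      const onCalendar = sprintLength - daysOff - appointmentDays - meetingDays;
      const ideal = onCalendar * (1 - distractionBuffer);
      return Math.round(ideal * 4) / 4; // round to a quarter day
    }

    // The example above, before any distraction buffer is applied:
    // 12 - 2 (birthday + MLK Day) - 0.5 (appointments) - 1 (meetings) = 8.5
    console.log(availableIdealDays(12, 2, 0.5, 1, 0));   // 8.5
    // With the typical ~20% buffer, it would shrink to about 6.75:
    console.log(availableIdealDays(12, 2, 0.5, 1, 0.2)); // 6.75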
I can work with this

In general, I really like this way of estimating task schedules. It does a much better job of allowing for the way I work than any other system I've been asked to work within. It's not perfect, and the overhead is a little higher than I'd like, but by and large it does a pretty good job. That's not to say we won't try another, possibly better, way of handling the planning process in the future. But for now, my work days are as ideal as can be.