
News

Posted over 9 years ago by Darren Herman
With the 10th anniversary update to Firefox, there was an important update to the new tab experience: Tiles were promoted to the stable Firefox build, making them available to hundreds of millions of users around the world. Today we are excited to announce our first two sponsored Tiles partners: CVS Health and their media agency Mindshare North America, and Booking.com.

What are Tiles for?

For years, the new tab page in Firefox was unique in being intentionally blank. But by 2012, we had learned that we could support many users' workflow through the new tab page. We added thumbnails based on a calculation of "frecency" (the frequency and recency of a user's browsing history, essentially the same way the Awesome Bar calculates relevance). We learned that many users find these history thumbnails useful, but we were not entirely satisfied with the feature: thumbnails might be broken, and the experience could be much more dynamic.

We need to be able to use our voice with our users, for example to raise awareness around issues that affect the future of the Internet, and to promote the causes we believe are important to that future.

We have also been exploring the content discovery space. There are many aspects of digital advertising that concern us: the overall integrity of the advertising system on the Web, the user's control over what happens to their data, and what happens to that data once the user has given consent. I have been writing for a while on this blog about the principles we follow and the ideas we have for improving digital advertising.

Lastly, we wanted to explore ways to contribute to the sustainability of the project in a way that we felt could align with Mozilla's values.

Tiles are our first iteration on solving these problems. They create a more useful, attractive and dynamic new tab page. They also represent an important part of our efforts to create new communications, content and advertising experiences over which Firefox users maintain control.

Partnering with Mozilla

We're very excited to have partnered with CVS Health (and Mindshare/GroupM) in the United States and Booking.com globally as our first two Firefox sponsored Tiles partners. We are live in 8 languages and 25 countries*, and will continue to iterate with Mindshare/GroupM and Booking.com, as well as with our community, as we improve the experience.

We have been delighted to work with Mindshare/GroupM and Booking.com. When we collaborate, we need to understand the vision and objectives of the partner, and whether that partner is able to work within the framework of Mozilla's principles. Running sponsored content in Tiles is results-based, not surveillance-based. We do not allow tracking beacons or code in Tiles. We are not collecting, or providing partners with, the data about our audience that most digital ad networks do. There are certain categories that require screening or have other sensitivities, such as alcohol and pharmaceuticals, that we will stay away from.

The user's experience

Users with no browsing history (typically a new installation) will see Directory Tiles, offering an updated, interactive design and suggesting useful sites. A separate feature, Enhanced Tiles, will improve the existing new tab page experience for users who already have a history in their browser. Tiles give Mozilla (including our local communities) new ways to interact and communicate with our users.

(If you've been using a pre-release Firefox build, you might have seen promotions for Citizenfour, a documentary about Edward Snowden and the NSA, appearing in your new tab over the past few weeks.) Tiles also offer Mozilla new partnership opportunities with advertisers and publishers, all while respecting and protecting our users. These sponsorships serve several important goals simultaneously, balancing the benefits to users of improved experience, control and choice with sustainability for Mozilla. What users currently see in the new tab page on Firefox desktop will continue to evolve, just like any digital product, and it will evolve along the lines I discussed earlier here. Above all, we need to earn and maintain users' trust.

Looking ahead

User control and transparency are embedded in all of our design and architecture; they are principles we seek to deliver to our users throughout their online life. Trust is something that you earn every day. The Tiles-related user data we collect is anonymized after we receive it, as it is for other parts of Firefox that we instrument to ensure a good experience. And of course, a user can simply switch the new tab page Tiles feature off.

One thing I must note: users of ad blocking add-ons such as Adblock Plus will see adverts by default, and will need to switch Tiles off in Firefox if they wish to see no ads in their new tab page. You can read more about how we design for trust here.

With the testing we've done, we're satisfied that users will find this an experience they understand and trust, but we will always keep that as a development objective. You can expect us to iterate frequently; we will never assume trust, we will always work to earn it. And if we earn and maintain that trust, we can create potentially the best digital advertising medium on the planet. We believe that we can do this, and offer a better way to deliver and receive adverts that users find useful and relevant. We also believe this is a great opportunity for advertisers who share our vision and wish to reach their audience in a way that respects them and their trust. If that's you, we want to hear from you: feel free to reach out to [email protected].

And a big thank you to our initial launch partners, CVS Health, Booking.com, and Citizenfour, who see our vision and are supporting Mozilla to have greater impact in the world.

* The list in full: Argentina, Australia, Austria, Belarus, Belgium, Brazil, Canada, Chile, Colombia, Ecuador, France, Germany, Hong Kong, Japan, Kazakhstan, Mexico, New Zealand, Peru, Russia, Saudi Arabia, Spain, Switzerland, United Kingdom, United States and Venezuela.
Posted over 9 years ago by [email protected] (Robert)
About seven years ago we implemented "full zoom" in Firefox 3. An old blog post gives some technical background to the architectural changes enabling that feature. When we first implemented it, I expected non-integer scale factors would never work as well as scaling by integer multiples. Apple apparently thinks the same, since (I have been told) on Mac and iOS, application rendering is always to a buffer whose size is an integer multiple of the "logical window size". GNOME developers apparently also agree, since their org.gnome.desktop.interface scaling-factor setting only accepts integers. Note that here I'm conflating user-initiated "full zoom" with application-initiated "high-DPI rendering"; technically, they're the same problem.

Several years of experience have shown that I was wrong, at least for the Web. Non-integer scale factors work just as well as integer scale factors. For implementation reasons we restrict scale factors to 60/N for positive integers N, but in practice this gives you a good range of useful values.

There are some subtleties to implementing scaling well, some of which are Web/CSS specific. For example, normally we snap absolute scaled logical coordinates to screen pixels at rendering time, to ensure rounding error does not accumulate; if the distance between two logical points is N logical units, then the distance between the rendered points stays within one screen pixel of the ideal distance S*N (where S is the scale factor). The downside is that a visual distance of N logical units may be rendered in some places as ceil(S*N) screen pixels and elsewhere as floor(S*N) pixels. Such inconsistency usually doesn't matter much, but for CSS borders (and usually not other CSS drawing!) such inconsistent widths are jarring. So for CSS borders (and only CSS borders!) we round each border width to screen pixels at layout time, ensuring that borders with the same logical width always get the same screen pixel width.

I'm willing to declare victory in this area. Bug reports about unsatisfactory scaling of Web page layouts are very rare, and aren't specific to non-integral scale factors. Image scaling remains hard; the performance vs quality tradeoffs are difficult, but integral scale factors are no easier to handle than non-integral. It may be that non-Web contexts are somehow more inimical to non-integral scaling, though I can't think why that would be.
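To make the two snapping strategies described above concrete, here is a minimal sketch (not Gecko's actual code; the helper names and the 1.5x factor are mine) of why render-time coordinate snapping yields inconsistent widths while layout-time width snapping does not:

```scala
// Sketch of the two snapping strategies; illustrative only, not Gecko code.
object Snapping {
  val S = 60.0 / 40 // a 1.5x scale factor, one of the allowed 60/N values

  // Render-time snapping: round *absolute* scaled coordinates so rounding
  // error never accumulates. A span of N logical units then comes out as
  // either floor(S*N) or ceil(S*N) pixels, depending on where it starts.
  def renderedWidth(start: Double, width: Double): Long =
    math.round(S * (start + width)) - math.round(S * start)

  // Layout-time snapping (used for CSS border widths): round the width
  // itself once, so equal logical widths always get equal pixel widths.
  def borderWidth(width: Double): Long = math.round(S * width)
}

Snapping.renderedWidth(10, 3) // 5 px (edges land at 15.0 and 19.5 -> 20)
Snapping.renderedWidth(11, 3) // 4 px (edges land at 16.5 -> 17 and 21.0)
Snapping.borderWidth(3)       // always 5 px, wherever the border sits
```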
Posted over 9 years ago by michal berman
How much space do you have in your life? How much of it is filled with obligations you've accepted for others? What percentage do you give to yourself, for pursuits that are just for you? Could you stand to make a little more space in your life just for you? Could fewer commitments mean more quality time to focus on your vision for your life?

I asked myself these questions recently and discovered that my life was almost entirely filled with obligations and guilt, with hardly any room to pursue my own passions or time to rejuvenate. One of my core values is to be of service to people: I love being helpful, creating value, offering support and transformation as a coach. Because of this, I felt obligated to serve my clients whenever they needed me, whether it cut into personal commitments or not. I would feel guilty if I wasn't able to be available for them. I ended up always putting everyone else first, while I was way at the bottom of the list, or sometimes never made it onto the list at all.

When I noticed how busyness had taken over, I devised what I call "the Mason Jar Exercise". I needed to create space, reconnect with my vision and purpose, and have time to contemplate what's next so that I can choose actions that support my next big chapter. Many of the folks I coach have similar challenges. They feel overwhelmed, have no time for themselves, and feel so busy in their lives that they've lost their way, lost their deep connection to their vision and purpose. Do you find that any of this applies to you? Can you check in with how your life is resonating with you, how much you feel in tune with your values in all of your pursuits, supporting roles and actions?

If you find, like so many of us, that you could use more space and time in your life, try out the Mason Jar Exercise for yourself. Here's how:

The Mason Jar Exercise for Spaciousness, Choice and Resonance

1. Take a Mason jar and fill it with gravel (or any other small rocks) in proportion to how much of your life you feel is out of integrity with your values. It's a great way to visualize the things taking up space in your life; I was at about 85%.
2. Fill the remaining space with some of your favourite things, e.g. mementoes, shells, etc.
3. As you stare at this now full, or even overfull, jar, contemplate what you want for your life and write out your 'stake': the mantra or statement that propels you forward in integrity. For me it was "with laughter and steadfast focus I open the door to my new way".
4. Each day, choose a stone to 'throw back into nature'. In my case, I would declare my stake, open my front door and toss the stone into our forest. Then challenge yourself that day to let go of an obligation or choice you've made that's out of line with your values, until the jar is empty save for your few favourite things. For example: "Today I am ridding myself of the obligation to ___, and this is making room for me to embrace ___." Now you can begin a study of the space, and of the singular beauty and significance of the items that remain. Notice what you want more of, and where the draw is to fill the jar back up again.

This was a powerful and transformative exercise for me. It took me about a month to complete. I noticed how full my life was with activity and distraction, how out of integrity with my values I felt, and that I wasn't making resonant choices. The result was a beautiful appreciation for my life and a deep acknowledgement of how I want to live. Less has become more. Stillness has expanded my perspectives. Now I feel like I can take on my next mountain.

What are you noticing?
Posted over 9 years ago by Roberto
We are currently evaluating possible replacements for our Telemetry map-reduce infrastructure. As our current data munging machinery isn't distributed, analyzing days' worth of data can be quite a pain. Also, many algorithms can't easily be expressed with a simple map/reduce interface.

So I decided to give Spark another try. "Another" because I have played with it in the past, but I didn't feel it was mature enough to be run in production, and apparently I wasn't the only one to think that. I feel like things have changed, though, with the latest 1.1 release, and I want to share my joy with you.

What is Spark?

In a nutshell, "Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala and Python, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming."

Spark's primary abstraction is the Resilient Distributed Dataset (RDD), which one can imagine as a distributed pandas or R data frame. The RDD API comes with all kinds of distributed operations, among which are our dear map and reduce. Many RDD operations accept user-defined Scala or Python functions as input, which allows average Joe to write distributed applications like a pro. An RDD can also be converted to a local Scala/Python data structure, assuming the dataset is small enough to fit in memory. The idea is that once you have chopped off the data you are not interested in, what you are left with fits comfortably on a single machine. Oh, and did I mention that you can issue Spark queries directly from a Scala REPL? That's great for performing exploratory data analyses.

The greatest strength of Spark, though, is the ability to cache RDDs in memory. This allows you to run iterative algorithms up to 100x faster than with the typical Hadoop-based map-reduce framework! It has to be remarked, though, that this feature is purely optional. Spark works flawlessly without caching, albeit more slowly. In fact, in a recent benchmark Spark was able to sort 1PB of data 3x faster using 10x fewer machines than Hadoop, without using the in-memory cache.

Setup

A Spark cluster can run in standalone mode or on top of YARN or Mesos. At the very least, to run a cluster you will need some sort of distributed filesystem, e.g. HDFS or NFS. The easiest way to play with it, though, is to run Spark locally, e.g. on OS X:

brew install spark
spark-shell --master "local[*]"

The above commands start a Scala shell with a local Spark context. If you are more inclined to run a real cluster, the easiest way to get going is to launch an EMR cluster on AWS:

aws emr create-cluster --name SparkCluster --ami-version 3.3 --instance-type m3.xlarge \
  --instance-count 5 --ec2-attributes KeyName=vitillo --applications Name=Hive \
  --bootstrap-actions Path=s3://support.elasticmapreduce/spark/install-spark

Then, once connected to the master node, launch Spark on YARN:

/home/hadoop/spark/bin/spark-shell --master yarn-client --num-executors 4 --executor-cores 8 \
  --executor-memory 8g --driver-memory 8g

The parameters of the executors (aka worker nodes) should obviously be tailored to the kind of instances you launched. It's imperative to spend some time understanding and tuning the configuration options, as Spark doesn't automagically do it for you.

Now what?

Time for some real code.
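To make the RDD abstraction described above concrete before diving in, here is a toy spark-shell session (the data and variable names are made up, not from the post) showing the transform/cache/action pattern that the real analysis below uses at scale:

```scala
// Toy example of the RDD operations described above; run it in a local
// spark-shell, where the SparkContext `sc` is already provided.
val numbers = sc.parallelize(1 to 1000000) // distribute a local collection
val evens = numbers.filter(_ % 2 == 0)    // transformations are lazy...
  .map(_ * 2)
evens.cache()                              // keep the result in memory
println(evens.count())                     // first action computes the RDD
println(evens.sum())                       // second action hits the cache
```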
Since Spark makes it so easy to write distributed analyses, the bar for a Hello World application should consequently be much higher. Let's write, then, a simple, albeit functional, recommender engine for Firefox add-ons. To do that, let's first quickly go over the math involved.

It turns out that, given a matrix of the rankings of each user for each add-on, the problem of finding good recommendations can be reduced to a matrix factorization problem. The model maps both users and add-ons to a joint latent factor space of some dimensionality f (the number of latent factors), so both users and add-ons are seen as vectors in that space. The factors express latent characteristics of add-ons, e.g. whether an add-on is related to security or to UI customization. The ratings are then modeled as inner products in that space, which are related to the angle between the two vectors (the dot product is proportional to the cosine of that angle): the closer the characteristics of an add-on align with the preferences of the user in the latent factor space, the higher the rating.

But wait, Firefox users don't really rate add-ons. In fact, the only information we have in Telemetry is binary: either a user has a certain add-on installed or he hasn't. Let's assume that if someone has a certain add-on installed, he probably likes that add-on. That's not true in all cases, and a more meaningful metric like "usage time" or similar should be used. I am not going to delve into the details, but having binary ratings changes the underlying model slightly from the conceptual one we have just seen. The interested reader should read this paper. MLlib, a machine learning library for Spark, comes out of the box with a distributed implementation of ALS (alternating least squares) which implements the factorization.

Implementation

Now that we have an idea of the theory, let's have a look at what the implementation looks like in practice. Let's start by initializing Spark:

val conf = new SparkConf().setAppName("AddonRecommender")
val sc = new SparkContext(conf)

As the ALS algorithm requires tuples of (user, addon, rating), let's munge the data into place:

val ratings = sc.textFile("s3://mreid-test-src/split/").map(raw => {
  val parsedPing = parse(raw.substring(37))
  (parsedPing \ "clientID", parsedPing \ "addonDetails" \ "XPI")
}).filter{
  // Remove sessions with missing id or add-on list
  case (JNothing, _) => false
  case (_, JNothing) => false
  case (_, JObject(List())) => false
  case _ => true
}.map{ case (id, xpi) => {
  val addonList = xpi.children.
    map(addon => addon \ "name").
    filter(addon => addon != JNothing && addon != JString("Default"))
  (id, addonList)
}}.filter{ case (id, addonList) => {
  // Remove sessions with empty add-on lists
  addonList != List()
}}.flatMap{ case (id, addonList) => {
  // Create add-on ratings for each user
  addonList.map(addon => (id.extract[String], addon.extract[String], 1.0))
}}

Here we extract the add-on related data from our JSON Telemetry pings and filter out missing or invalid data. The ratings variable is an RDD and, as you can see, we used the distributed map, filter and flatMap operations on it. In fact it's hard to tell the vanilla Scala code apart from the distributed one.
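For reference, the objective that the ALS.trainImplicit calls below minimize, following the implicit-feedback paper mentioned above (the notation here is mine, not from the original post), is

$$ \min_{x_*,\, y_*} \; \sum_{u,i} c_{ui} \left( p_{ui} - x_u^\top y_i \right)^2 \;+\; \lambda \left( \sum_u \lVert x_u \rVert^2 + \sum_i \lVert y_i \rVert^2 \right) $$

where $p_{ui}$ is the binary "user u has add-on i" indicator, $c_{ui} = 1 + \alpha\, r_{ui}$ is a confidence weight (the trailing 1.0 argument passed to trainImplicit below is $\alpha$), and $\lambda$ is the regularization strength we tune next.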
As the current ALS implementation doesn't accept strings for the user and add-on representations, we will have to convert them to numeric ones. A quick and dirty way of doing that is to hash the strings:

// Positive hash function
def hash(x: String) = x.hashCode & 0x7FFFFF

val hashedRatings = ratings.map{ case(u, a, r) => (hash(u), hash(a), r) }.cache
val addonIDs = ratings.map(_._2).distinct.map(addon => (hash(addon), addon)).cache

We are nearly there. To avoid overfitting, ALS uses regularization, the strength of which is determined by a parameter lambda. As we don't know the optimal value of the parameter beforehand, we can try to find it by minimizing the mean squared error over a pre-defined grid of values, using k-fold cross-validation.

// Use cross-validation to find the optimal regularization parameter
val folds = MLUtils.kFold(hashedRatings, 10, 42)
val lambdas = List(0.1, 0.2, 0.3, 0.4, 0.5)
val iterations = 10
val factors = 100 // use as many factors as computationally possible

val factorErrors = lambdas.flatMap(lambda => {
  folds.map{ case(train, test) =>
    val model = ALS.trainImplicit(train.map{ case(u, a, r) => Rating(u, a, r) }, factors, iterations, lambda, 1.0)
    val usersAddons = test.map{ case (u, a, r) => (u, a) }
    val predictions = model.predict(usersAddons).map{ case Rating(u, a, r) => ((u, a), r) }
    val ratesAndPreds = test.map{ case (u, a, r) => ((u, a), r) }.join(predictions)
    val rmse = sqrt(ratesAndPreds.map { case ((u, a), (r1, r2)) =>
      val err = (r1 - r2)
      err * err
    }.mean)
    (model, lambda, rmse)
  }
}).groupBy(_._2)
  .map{ case(k, v) => (k, v.map(_._3).reduce(_ + _) / v.length) }

Finally, it's just a matter of training ALS on the whole dataset with the optimal lambda value, and we are good to go to use the recommender:

// Select the lambda with the lowest cross-validated error
val optimalLambda = factorErrors.minBy(_._2)

// Train the model with the optimal lambda on all available data
val model = ALS.trainImplicit(hashedRatings.map{ case(u, a, r) => Rating(u, a, r) }, factors, iterations, optimalLambda._1, 1.0)

def recommend(userID: Int) = {
  val predictions = model.predict(addonIDs.map(addonID => (userID, addonID._1)))
  val top = predictions.top(10)(Ordering.by[Rating, Double](_.rating))
  top.map(r => (addonIDs.lookup(r.product)(0), r.rating))
}

recommend(hash("UUID..."))

I omitted some details, but you can find the complete source in my github repository. To submit the packaged job to YARN, run:

spark-submit --class AddonRecommender --master yarn-client --num-executors 4 \
  --executor-cores 8 --executor-memory 8g addon-recommender_2.10-1.0.jar

So what?

The question is, how well does it perform? The mean squared error isn't really telling us much, so let's take some fictional user sessions and see what the recommender spits out.

For user A, who has only the add-on Ghostery installed, the top recommendations are, in order:

NoScript
Web of Trust
Symantec Vulnerability Protection
Better Privacy
LastPass
DuckDuckGo Plus
HTTPS-Everywhere
Lightbeam
Google Translator for Firefox

One could argue that 1 out of 10 recommendations isn't appropriate for a security aficionado. Now it's the turn of user B, who has only the Firebug add-on installed:

Web Developer
FiddlerHook
Greasemonkey
ColorZilla
User Agent Switcher
McAfee
RealPlayer Browser Record Plugin
FirePHP
Session Manager

There are just a couple of add-ons that don't look that great, but the rest could fit the profile of a developer. Considering that the recommender was trained on only a couple of days of data for Nightly, I feel like the result could easily be improved with more data and tuning, like filtering out known antivirus, malware and bloatware.
Posted over 9 years ago by [email protected] (Kim Moir)
There was a very interesting release engineering summit this Monday, held in concert with LISA in Seattle. I was supposed to fly there this past weekend so I could give a talk on Monday, but late last week I became ill and was unable to go. That was very disappointing, because the summit looked really great and I was looking forward to meeting the other release engineers and learning about the challenges they face.

[Photo: Scale in the Market, © Clint Mickel, Creative Commons by-nc-sa 2.0]

Although I didn't have the opportunity to give the talk in person, the slides for it are available on slideshare and my Mozilla people account. The talk describes how we scaled our continuous integration infrastructure on AWS to handle double the number of pushes it handled in early 2013, all while reducing our monthly AWS bill by two thirds.

[Chart: cost per push from October 2012 until October 2014, computed as our monthly AWS bill divided by the number of monthly pushes (commits). This does not include costs for on-premise equipment.]

Thank you to Dinah McNutt and the other program committee members for organizing this summit. I look forward to watching the talks once they are online.
Posted over 9 years ago by [email protected] (Kim Moir)
Here's the October 2014 monthly analysis of the pushes to our Mozilla development trees. You can load the data as an HTML page or as a json file.

Trends

We didn't have a record-breaking month in terms of the number of pushes, however we did have a daily record on October 8 with 715 pushes.

Highlights

12821 pushes, up slightly from the previous month
414 pushes/day (average)
Highest number of pushes/day: 715 pushes on October 8
22.5 pushes/hour (average)

General Remarks

Try keeps having around 39% of all the pushes, and gaia-try about 31%. The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 21% of all the pushes.

Records

August 2014 was the month with the most pushes (13,090 pushes)
August 2014 has the highest pushes/day average, with 422 pushes/day
July 2014 has the highest average of pushes per hour, with 23.51 pushes/hour
October 8, 2014 had the highest number of pushes in one day, with 715 pushes
Posted over 9 years ago by Rosana
We're very proud to announce that we have a new Reps Dashboard that lists your action items! In it you will find most of the action items you have in the Reps program; it will help you organize your activities and plan your time better. We're also hoping this will help mentors and council members manage the workload, prioritize, and ultimately keep the program running smoothly. Check out the dashboard and let us know your thoughts. We know there may still be improvements to be made, so your feedback will help us figure them out.

This dashboard comes at the perfect time: we have a first mission for ALL Reps, and the dashboard will let you do it in no time. We want to understand the impact of the Reps program in 2014, so we are asking all of you to please update ALL the post-event metrics for this year. It won't take much time, and in the end you'll help us better articulate the impact we're having with the Reps program. We are introducing text fields so that you can add important links to your post-event metrics, so any links to the Makes created, the press articles generated, or the social media impact would be of great help! Help us understand how much reach your event had. How many people attended? How many people did you reach on social media, how many through press articles or blog posts? Let's work together on making the impact of our work understandable. We have so much to be proud of; let's document it!
Posted over 9 years ago by Daniel Stenberg
A while ago I wrote about my hunt for a new keyboard, and in my follow-up conversations with friends on that subject I quickly came to the conclusion that I should get better analysis and data on how I actually use a keyboard and the individual keys on it. And if you know me, you know I like (useless) statistics.

So I tried out the popular and widely used Linux key-logger software 'logkeys' and immediately figured out that it doesn't really support the precision and detail level I wanted, so I forked the project and modified the code to work the way I want it: keyfreq was born. Code on github. (I forked it because I couldn't find any way to send my modifications back to the upstream project, and I don't really feel a need for another project.)

Then I fired up the logging process, and it has been running in the background for a while now, logging every key stroke with a time stamp.

Counting key frequency and how it is distributed very quickly turns into basically seeing when I'm active in front of the computer, and it also gave me thoughts about what a high key frequency actually means in terms of activity and productivity. Does a really high key frequency really mean that I was working intensely, or isn't it perhaps more a sign of mail-sending time? When I debug problems or research details, won't those periods result in slower key activity? In the end, I guess that over time the key frequency chart basically says that if I pressed a lot of keys during a period, I was working on something then. Hours or days with a very low average key frequency are probably times when I don't work as much. The weekend key frequency is bound to be slightly wrong, due to me sometimes doing weekend hacking on other computers where I don't log the keys, since my results are recorded from a single specific keyboard only.

Conclusions

So what did I learn? Here are some conclusions and results from 1276614 keystrokes made over a period of the most recent 52 calendar days.

I have a 105-key keyboard, but during this period I only pressed 90 unique keys. Out of the 90 keys I pressed, 3 were each pressed more than 5% of the time; in fact, those 3 keys account for more than 20% of all keystrokes. Those keys are: <Space>, <Backspace> and the letter 'e'. <Space> stands out from all the rest, as it was used more than 10% of the time. Only 29 keys were used for more than 1% of the presses, giving this a really long tail with lots of keys hardly ever used.

Over this logged time, I registered key strokes during 46% of all hours. Counting only the hours in which I actually used the keyboard, the average was 2185 key strokes/hour, or 36 keys/minute. On the average week day (excluding weekend days), I registered 32486 key presses. In the most active single minute during this logging period, I hit 405 keys. In the most active single hour I managed 7937 key presses. During weekends my activity is much lower; there I average 5778 keys/day (7.2% of all activity was on weekends). Counting the most active hours over the day, there are 14 hours with more than 1% of the activity and 5 with less than 1%, leaving 5 hours with no keyboard activity at all (02:00-06:59). Interestingly, the hour between 23:00 and 24:00 at night is my single busiest hour, with 12.5% of all keypresses during the period.
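(As an aside, shares like these are easy to compute from a timestamped log. The sketch below assumes a hypothetical whitespace-separated "epoch-seconds keyname" line format; keyfreq's real output format may differ.)

```scala
// Minimal sketch: per-key shares and the busiest minute from a keystroke log.
// The "epoch-seconds keyname" line format is an assumption for illustration.
import scala.io.Source

object KeyStats extends App {
  val events = Source.fromFile("keys.log").getLines()
    .map(_.trim.split("\\s+"))
    .collect { case Array(ts, key) => (ts.toLong, key) }
    .toVector

  val total = events.size.toDouble
  val byKey = events.groupBy(_._2).mapValues(_.size)

  // Keys pressed more than 5% of the time, most frequent first
  byKey.toSeq.sortBy(-_._2).takeWhile(_._2 / total > 0.05).foreach {
    case (key, n) => println(f"$key%-12s ${100 * n / total}%.1f%%")
  }

  // Busiest minute: bucket timestamps into 60-second windows
  val (minute, presses) = events.groupBy(_._1 / 60).mapValues(_.size).maxBy(_._2)
  println(s"busiest minute began at epoch ${minute * 60}: $presses presses")
}
```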
Random "anecdotes"

Longest contiguous time without keys: 26.4 hours
Longest key sequence without backspace: 946
There are 7 keys I pressed only once during this period; 4 of them are on the numerical keypad and the other three are F10, F3 and <Pause>.

More

I'll try to keep the logging going, and see whether things change over time or whether other patterns show up in the data when looked at over a longer period.
Posted over 9 years ago by Karl Dubost
Testing in an isolated environment

When testing a Web compatibility issue, many things can interfere with your testing, and the most neutral environment will help to identify the issue. For example, ad blockers, user stylesheets, etc. will lead to sites malfunctioning, or will create a false sense of a site working when it is not. Basically, we need to start the testing with an empty, clean browser profile. As you do not want to have to restart your main browser all the time, you need to set up a different profile, so you have your normal browser and your webcompat browser side by side.

Choosing your Web compatibility testing browser

I will explain my own configuration below, but you may adjust it to your own specific needs. I'm running Aurora (Firefox Developer Edition) as my main browser.

Profile manager and webcompat profile

You will need to create an additional profile. Follow the steps on using the Firefox profile manager. When the profile manager window opens, choose "Create Profile…" and name it webcompat (or the name of your choice). Quit the profile manager. Now you can restart your normal browser (Developer Edition for me); the profile manager will automatically pop up a window, where you select default, for example. Then start your test browser (normal Firefox for me) and this time select the webcompat profile.

Finishing the webcompat profile

We said that we wanted a browser that, each time we start it, is clean of any interactions with other environments, present or past. Go to the Firefox Preferences and follow these steps:

General: "When Firefox starts": select "Show a blank page".
Privacy: History: select "Use custom settings for history" and configure it as in the screenshot below: check "Clear history when Firefox closes", click its "Settings…" button and select all the options.

The only add-on I have installed is User Agent Switcher, for testing by faking the User Agent string of mobile devices or other browsers in Firefox.

Restart your test browser one more time. You are now in a clean profile mode. Each time you want to test something new, or you are afraid that your previous actions have created interference, just restart the browser. You will also notice how fast the browser is without all the accumulated history.

Enjoy testing and analyzing Web compatibility bugs.

Otsukare.
Posted over 9 years ago by Denelle Dixon-Thayer
Mozilla’s commitment to transparency about our data practices is a core part of our identity. We recognized the value in giving a clear voice to this commitment through a set of Privacy Principles that we developed in 2010. These Principles, … Continue reading