UPDATE: Scroll down for update on May 26, 2013.
Since beginning work on the Firefox OS project, the number one question I’m asked is “Does it run on my phone?”. Sadly, the answer for almost everyone is “no”. The question itself is interesting though, and shows how people – even geeky technical people – don’t have a good understanding of how mobile devices work, or of the whole business and technical ecosystem that brings these things into the hands of consumers (hm, maybe that’ll be my next blog post). Porting an operating system to a device is tricky work in the best of circumstances, and when done without the direct assistance of the various business entities involved in the stack for any single device (OEM, chipset manufacturer, original OS vendor), it involves a lot of, well, fiddling around. The kind of fiddling around that voids warranties and turns $600 hardware into a paperweight. The success and hackability of Android simplified things a lot, creating a relatively large community of people doing OS-to-device porting, and enabling a lot of what allowed Firefox OS to bootstrap so quickly. However, it’s still not easy.
I was curious about who is playing around with Firefox OS in the dark corners of the Mos Eisley of the device-porting world, the XDA-Developers forums. Below, I’ve listed a number of threads involving efforts to port Firefox OS to various devices. Some have builds, some are aborted attempts, but taken together they show a very exciting level of interest in putting a truly open Web operating system on low-powered commodity mobile hardware.
Oh, and if you’re interested in porting Firefox OS to your device, the basic instructions to get started are on the MDN B2G Porting Guide. If you scan any of the threads below or have ever tried doing this kind of work before, you already know: Thar be dragons. You could lose your device and your sanity, and will very likely void the warranty. Consider yourself warned.
- Samsung Epic 4G Touch, Samsung d710 (and some code on Github)
- HTC Wildfire S
- HTC Sensation – Some talk of debugging the porting process, and links to other ports such as Razr, Ascend g300.
- Samsung Galaxy Gio
- HTC Jewel (EVO + LTE)
- Samsung Galaxy Nexus
- LG Optimus 2X
- Samsung Nexus S & Nexus S 4G – thread 1, thread 2
- HTC Dream/G1 (old skool!)
- Samsung Galaxy S III
There are also some efforts at porting to other types of devices, such as Oleg Romashin’s experiments with Firefox on Raspberry Pi, MDN instructions for building for Pandaboard, and a bug for some core changes to Firefox OS to ease porting to basic Linux systems like Beagleboard and Chromebox.
UPDATE May 26, 2013
New devices since this was originally posted, and some fantastic updates:
- Raspberry Pi! Philipp Wagner, a student from Germany, updated Oleg Romashin’s porting work, and wrote the Raspberry Pi for Firefox OS guide, where you can download builds and read instructions for building it yourself. I installed his build and was up and running on my Raspberry Pi in minutes. Check out his blog post, and buy him something from his Amazon wishlist.
- As I’m sure you’ve heard by now, Geeksphone has two devices with Firefox OS pre-installed now available for purchase. The devices keep selling out FAST, so keep a watch on their Twitter account. Also, according to their forums they’re going to make nightly updates available over-the-air soon, so you can stay on the latest versions of Firefox OS.
- At this year’s Mobile World Congress, Sony released ROMs for the Xperia E.
- The XDA-Developers blog reported on a Firefox OS port for the HTC HD2.
- Also on the XDA-Developers blog was a port for the HTC Explorer (aka Pico).
- You can find build instructions for running Firefox OS on Pandaboard up on the Mozilla Developer Network.
A couple of other notes:
- XDA-Developers now has a Firefox OS forum, where there are lots of threads on the porting process, individual devices, and app development.
- All XDA-Developers blog posts tagged Firefox OS.
We quietly shipped Firefox 11 with a bunch of performance fixes that both reduce the amount of memory that Firefox uses, and improve the responsiveness of its user interface.
These types of changes are not easy to talk about. Often they’re very technical, and meaningless to anyone but the developers involved, which is probably why we usually don’t enumerate them in the release notes or other public announcements. “Firefox is 74% faster when you open menu X, and twice as fast in some garbage collection scenarios!” Yeah, not an eye-popping headline. We could do a lot better in communicating these improvements in broadly meaningful ways though – nice graphs or some competitive site like arewefastyet would help a lot. But until then, here’s a short summary of the improvements in Firefox 11. And if you know of other performance fixes that don’t fall into the categories below, please add them in the comments!
Memory Use (aka “memshrink”)
The Memshrink project has been going for quite a while now, led by Nicholas Nethercote. He blogs weekly updates on the project’s activity. According to Bugzilla, there were 29 memshrink bugs marked fixed during the Firefox 11 development cycle – four of which were P1, or very high priority. Some of the fixes were related to tools and detection methods, but many are actual reductions in memory use. The changes that made it into Firefox 11 include fixes for detected leaks, removal of zombie compartments, lazy-loading data, reducing the size of some caches, reducing memory used while scrolling, and many more.
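Lazy-loading is one of the recurring memshrink patterns: don’t build a data structure until something actually asks for it. A minimal sketch of the idea in plain JavaScript – the helper and property names here are illustrative, not actual Firefox code (Firefox has its own helpers for this):

```javascript
// Sketch of the lazy-loading pattern: the expensive object is not built
// until first access, and the getter then replaces itself with the value
// so later accesses pay nothing extra. Names are invented for illustration.
function defineLazy(obj, prop, build) {
  Object.defineProperty(obj, prop, {
    configurable: true, // allows the getter to redefine itself below
    get() {
      const value = build(); // pay the construction cost only on first use
      Object.defineProperty(obj, prop, { value });
      return value;
    },
  });
}

let buildCount = 0;
const service = {};
defineLazy(service, "cache", () => {
  buildCount++;
  return new Map(); // stands in for an expensive-to-build structure
});
```

Until `service.cache` is touched, no memory is spent on the cache at all – which is the whole point when many such structures exist but only a few ever get used in a session.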
UI Responsiveness (aka “snappy”)
The Snappy project started last December, and is run by Taras Glek. Its aim is to improve the responsiveness of the Firefox UI. Taras has been posting weekly updates on Snappy activity on his blog. Bugzilla shows 15 snappy bugs marked fixed during the Firefox 11 development cycle. The project had just started, but there are still some significant wins in this release! Firefox 11 includes reductions in queries in the bookmarks system, reduced preference lookups, faster data collection for session restore, and various improvements in the DOM code.
While it’s not related to performance, I do want to call attention to something that many people don’t seem to know: In Firefox 10 (yes, the previous release) we stopped marking most add-ons incompatible when you upgrade. That means that a LOT more of your add-ons will continue to work when you upgrade Firefox from here on out. The only add-ons that still require compatibility bumps are those that have binary components, since they need to be recompiled against the current version.
My team is “a riddle, wrapped in a mystery, inside an enigma”. Ok, maybe not *that* mysterious, but we’re definitely involved in a variety of projects. Here’s a roll-up of a week in browser-land.
- Tim landed a patch that adds reporting of leaked windows and documents to our unit test runs. Dao’s work has shown that these often reflect leaks in the platform not just the test, so it’s very valuable to know about them. The next step is to track the leak numbers between runs and report regressions by making the tree color change somehow. Tim also blogged about this.
- Neil has a patch up for adding telemetry that measures how long it takes to open a browser window.
- Mano continued his work on the Safari migrator, which includes various cross-migrator improvements. This work is part of a broader rewrite of the migrators to use Places async APIs, for improved responsiveness.
- Marco landed a major rewrite of Livemarks, converting them to load content on-demand and asynchronously, reducing synchronous IO on the main thread.
- Paolo continued his work on rewriting favicon API consumers to use the new asynchronous API.
- Drew has completed the conversion of the content preferences service to use async mozStorage statements, and is in the process of updating the various JS and C++ consumers to use the new API.
- I talked a bit with Mark Cote about Peptest, a new framework for testing browser responsiveness. Peptest is currently available on the tryserver, but results are highly variable, so there’s more work to do before it can be turned on for all check-ins. Once Peptest is ready to use, we’ll evangelize the crap out of it.
- I continued working on breaking apart the session restore service into more manageable pieces (XPath generator, tab state, form data).
- I put up a WIP patch for reading the session file asynchronously during startup.
- I started working on measuring how much time is taken up by creating about:blank browsers when restoring sessions on demand.
New Tab Page
- Tim worked on Stephen Horlander’s redesign of the new-tab page, and also fixed some bugs.
New Download Manager
- Paolo consolidated existing patches into a single one for final pass and check-in. Marco has been reviewing.
- Paolo and Marco are getting together in Novara on Saturday to sprint on fixing theme bugs and other cleanup required for landing.
Add-on SDK Integration
- Met with Irakli to create a plan for landing the core SDK modules in Firefox, so that the whole SDK would no longer need to be bundled with every add-on.
- Created a feature page for tracking the project.
- I talked with Brian Warner about syncing Git repos with HG repos.
Web Apps Integration
- I updated the feature page with latest from UX, Apps and PM.
- Felipe Gomes is driving the Firefox side of the integration work, with Tim helping out on UI. They’re working together with Fabrice, Myk, Dan and Tim A.
- David has the window.crypto.getRandomValues patch up for final review and superreview.
- David put up an initial patch for the navigator.id API for Web content.
- David gets to play with NSS+DOM+XPCOM to get DOMCrypt’s internal API working. Lucky guy.
- David is working with Shane Caraveo to get the new Share add-on reviewed and landed.
- Tim and Marco kicked off sponsorship process for Italy’s jsDay conference. They’ll be putting up a booth there, along with Paolo Amadini, representing Mozilla. Thanks to Stormy and Shez for the support from Developer Engagement.
Of course, we all worked on various other things as well, from code review to bug triage to random maintenance fixes. Activity logs and whatnot are listed below.
- David Dahl: Bugzilla activity log, status updates, blog, Twitter
- Drew Willcoxon: Bugzilla activity log, status updates
- Felipe Gomes: Bugzilla activity log, status updates, blog, Twitter
- Asaf Romano: Bugzilla activity log
- Marco Bonardo: Bugzilla activity log, status updates, blog, Twitter
- Neil Deakin: Bugzilla activity log, status updates, blog
- Paolo Amadini: Bugzilla activity log
- Tim Taubert: Bugzilla activity log, status updates, blog, Twitter
- Dietrich Ayala: Bugzilla activity log, status updates, blog, Twitter
We’re at the end of our Performance work-week here in Brussels, and gearing up for a two-day orgy of European open-source culture at FOSDEM. I’ve successfully acquired a cold (and hopefully not worse) due to the temperature being consistently below freezing.
However, the people here in Brussels have made up for their weather shortcomings by welcoming us wherever we go. Between the hackerspaces and co-working spaces, and the restaurants that happily take large groups with little or no notice, I’m very impressed!
The hackerspace in Brussels is located in Schaerbeek, a neighborhood to the north of the city center. The space used to be a vehicle repair garage for the city, but was given up for use by the geeks. They’ve installed serious hardware, and have fully equipped the place with everything needed for survival. Thanks to Rafael and Patrick for answering all our questions and helping us make mate and find food nearby. Lunch on the second day was described by Patrick as a “little French place”, but turned out to be a hall of worship dedicated to Tintin!
Faubourg St Antoine is filled with Tintin toys, art and knick-knacks, including some alternate interpretations and even a clarification for something I’d always wondered about. Sadly, they’ve been issued a legal notice from the current copyright (or EU equivalent) holders requiring them to remove all the Tintin materials from public display.
Once the temperatures dropped far below freezing, we relocated to BetaGroup Coworking Brussels in Etterbeek, to the southeast of the center. Ramon Suarez, the manager of the space, was very accommodating, taking us on short notice. The wi-fi was blazing fast, the coffee was hot, and the ping-pong was a welcome break from heads-down hackery. The space itself was fantastic, with a great combination of quiet co-working areas, public spaces and private meeting offices. With tons of natural light, steel bridges and a meeting space on what looked like a submarine conning tower, it was truly impressive.
We had a wonderful lunch at a very tidy restaurant nearby.
Overall, it’s been a fun and productive week, if a bit chilly. Like, really chilly. Ridiculously so. Why do people even inhabit places that get this cold? Honestly, wtf.
The Performance team and some of the Firefox team are spending the week in Brussels, laying waste to some of the performance issues in the browser.
I’ll be uploading pics to flickr with the tag ‘perfworkweek2012’.
I am needy:
- I want to remember URLs. Bookmarking is too manual and akin to throwing URLs in the sarlacc pit. The user-interface pieces around bookmarking have not changed in a decade. No, the awesomebar is not a good tool for this. I don’t even come close to being able to recall what I want the awesomebar to recall. I need to be ambiently prompted in a way that is visual and has context.
- I need to be able to focus on a given task, project or idea. A single sea of tabs doesn’t help at all. I want blinders. I want an environment. Task immersion.
- I need to be able to categorize URLs into groups, such that the whole group is easily accessible. Trees and menus can go to hell, along with the RSI, eye-strain and visual boredom they provide.
- I need to be able to switch contexts quickly and easily. Eg: From bug triage to perf to dashboards to music, etc.
- I don’t want to leave the browser. Windows are super heavyweight feeling and come along with all kinds of operating system baggage: visual, interaction, performance, etc.
I realized recently that a pattern had emerged in my browser usage that meets a bunch of these needs:
- I set up Firefox to restore my session every time it starts. This way my groups persist, and all the URLs in each group are loaded with their cookies and other session data ready to go when I need them.
- I have “Don’t load tabs until selected” checked, so that Firefox does all this with as little memory as possible – the web pages in all the tabs in all the groups aren’t loaded until I actually use them.
- I restart the browser a couple of times per day to keep memory use slim, which in turn keeps the browser responsive. Restarting is super fast and responsive because I have “Don’t load tabs until selected” (see previous point).
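For anyone wanting to replicate this setup from a user.js file rather than the preferences UI, the two options above map to prefs you can set directly. (Pref names as I recall them from this era of Firefox – verify them in about:config for your build.)

```javascript
// user.js – assumed pref names; double-check in about:config
user_pref("browser.startup.page", 3);                      // restore previous session on startup
user_pref("browser.sessionstore.restore_on_demand", true); // "Don't load tabs until selected"
```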
This is the happiest I’ve been with any browser in years. However, there are still a bunch of pain points. I want SO much more.
- I want to tag URLs without bookmarking them. The bookmark concept just gets in the way. It’s an unnecessary unit of psychological weight. It’s a vestigial metaphor of days gone by.
- I want to open a tab group by typing the name of the group in the URL bar.
- I want to add URLs to multiple groups easily, similar to tagging. I’d like to do it via the keyboard.
- I want to send the current tab to a different group (or groups) using only the keyboard.
- I want app tabs that are specific to a given group, and some that are global.
- I want to switch quickly from an app tab back to the last non-app tab I was at. Or be able to peek quickly at an app tab without losing my context in the current set of tabs.
- I want to switch quickly back to the last tab I was at. (Eg: When I open a new tab, and get sent to the end of the current set of tabs). OR be able to have new tabs open immediately to the right of the current tab, with linked parentage.
- I’m tired of sites being browsers inside a browser. And I don’t want “site-specific” browsers – I want a “me-specific” browser, for local or dynamic content.
- Firefox creates the <tab> elements for hidden tabs when restoring the session. It would re/start even faster and use even less memory if the XUL elements for hidden tabs were not created until the group itself was opened.
- As I work, memory use increases and responsiveness decreases, since I keep visiting more and more tabs. If I haven’t visited a tab in a while, Firefox should unload it. If I haven’t visited a group in a while, Firefox should completely unload the whole group, session content *and* XUL elements.
- A downside of the “Don’t load tabs until selected” option is that tab content is not ready and waiting when you select the tab. The web content has to load and the session data for the tab must be restored. Firefox should pre-load tabs that are adjacent to the active tab. This feature, combined with the dormant-izing of tabs described above, would result in a decent balance of instant-gratification availability and responsiveness, and resource frugality.
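As a rough sketch of the unload-after-idle policy described above – the threshold, data shape, and function are all hypothetical, nothing here is actual Firefox code:

```javascript
// Sketch of an idle-tab unloading policy: tabs not visited within
// MAX_IDLE_MS get their live content dropped, keeping only the session
// data needed to restore them on demand. Purely illustrative.
const MAX_IDLE_MS = 30 * 60 * 1000; // 30 minutes – an arbitrary threshold

function unloadIdleTabs(tabs, now) {
  for (const tab of tabs) {
    if (tab.loaded && now - tab.lastVisited > MAX_IDLE_MS) {
      tab.loaded = false; // drop the live content...
      // ...but keep tab.sessionData so the tab can be restored on select
    }
  }
  return tabs.filter((t) => t.loaded).length; // how many stayed loaded
}
```

The same policy could apply one level up: when every tab in a group goes dormant, drop the group’s UI elements too.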
One idea I had was a merging of tagging and groups: The groups in Panorama would be comprised of the set of tags that exist. This would result in nice integration bits like search-by-tag in the awesomebar being equivalent to search-in-group. It also might mean that we’ll need to make Panorama “bigger” – maybe allow it to be zoomed, or make it an infinite canvas.
An idea for navigating dynamic content is to merge feeds and groups. Imagine you have a BBC group, which has the BBC feed as its source. The set of “tabs” in that group are the items in the feed. If you open the group, all the URLs in the feed are loaded into tabs (but not *really* loaded if you restore-on-demand).
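A sketch of that feed-to-group mapping – the data shapes and field names are made up for illustration, not any real feed or Panorama API:

```javascript
// Turn a feed into a tab group: each feed item becomes a lazily-restored
// "tab" entry. With restore-on-demand, nothing is fetched until a tab is
// actually selected. All structures here are hypothetical.
function feedToGroup(feed) {
  return {
    name: feed.title,
    source: feed.url, // the group can re-sync its tabs from this feed
    tabs: feed.items.map((item) => ({
      title: item.title,
      url: item.link,
      loaded: false, // restore-on-demand: load only when selected
    })),
  };
}
```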
Anyways, it’s interesting to think about how to prototype some of these ideas in an add-on or a collection of them. I’m sure some of the items above already exist as add-ons.
I realize that I’m not a “typical user”. However, after almost 6 years of browser-making, I’m pretty damn sure that there is no such person. I do not believe that the one-size-fits-all browser is the future. When adding a feature or fixing a bug, we shouldn’t have to choose between grandma and the geeks. In order to stay relevant in a highly-personalized future, we should strive to ensure that Firefox is pliable enough that we who make it are not restricted by it, and more importantly we must ensure that add-on developers are free to mash-up and remix and experiment the f*ck out of it.
This year we started talking about ways to improve how we develop Firefox features. This is a snapshot of where we’re at, and what’s coming next.
We began with some conversations about current problems regarding coordination, speed, contribution, regressions, ease of development: my initial dev.planning post, Joe Walker’s post ‘How to Eat an Elephant’, my blog post, dev.planning add-on bundling post.
There was general recognition that the Firefox development model is not agile nor flexible enough to meet the needs of our 2012 goals. We need to be able to ship features that are developed mostly outside of what’s considered the “core” Firefox team. We need to better support the use of external code repositories and bug tracking systems. We need to support features written using the Add-on SDK, both inside and bundled with Firefox.
Here’s where we’re at:
- A group of people formed to start discussing how to make these things happen: the Jetpack Features group. We meet every other Friday (meeting details and notes).
- We have a project branch (Cedar) where we can experiment and do performance testing.
- We corralled the Firefox and Toolkit module owners and the leads of the SDK project in a room to orchestrate the mechanics of landing the SDK inside Firefox itself (notes).
- L10n: The Jetpack team are hard at work on closing this gap with both localized strings in script and localization of HTML content inside add-ons.
- Performance: Jeff Griffiths blogged recently about the performance impact of the core SDK runtime on startup time. More performance tests are coming.
- Feature bundling: The BrowserID feature will likely ship in Firefox as a bundled add-on. There are still open questions though: Is it uninstallable? Does it even show up in the Add-ons Manager? Can it upgrade outside of the Firefox dev cycle? More work to do here.
- Automation: We have the Jetpack unit tests being run with every Mozilla-central check-in (though hidden by default currently). We have the SDK unit tests running on every SDK check-in for both Nightly and Aurora (tree). We still need the Mozilla-central unit tests run on every SDK check-in (that’ll come with integrating the SDK into Firefox), and performance tests run for Mozilla-central and SDK check-ins.
What’s coming up in 2012:
- Completing the picture of addon-as-feature integration into the Moz-ecosystem: l10n, performance and unit test automation.
- Ship BrowserID as a bundled add-on.
- Ship the SDK inside Firefox (the relevant parts, anyway).
- Ship Web Apps as a bundled add-on, if it makes sense to do so.
A friend recently asked me to take a look at his resume. After reading countless resumes and doing hundreds of interviews for Mozilla over the last 5+ years (whoa, almost 6 now!), a bunch of stuff came jumbling out. With his permission, I’ve copied the feedback here. Maybe it’ll be of use to someone looking for a manager’s eye on their resume. Most of the comments make sense on their own, so I didn’t copy the resume itself here.
Hey! Overall, looks good – you look like a solid candidate for a Java programming gig, with some potential for leadership, but not interested in it. Probably a senior coding member on a team (given the # of years experience) or a lone-wolf programmer that a manager could hand a project off to and expect you to run with it.
A couple of thoughts below with my critical hat on, ranging from grammar to technical. Please take with many grains of salt – obviously I’m not representative of all hiring managers. Also, I didn’t have the context about whether this is tailored for a specific position, or if you are just getting ready to hit the market. Is the resume tailored for your ideal job, or the job you’d settle for? Hard to evaluate without that context.
- My biggest criticism is that your personality doesn’t come through, only your technical skills. Everyone has a list of current/former employers, a list of acronyms, and *some* have a list of side projects (you get points for that in my book). The summary is your first chance to personalize what is otherwise an abstract description of a skill set, so take advantage of it. For example, you could start with “I am a lead developer” instead of “Lead developer”. Honestly, do you really talk in robot language like that in real life? No. Would you want to work with someone that did? Probably not. So, look for any opportunity to be more than just a developer of code.
- Clarify if you’re looking to lead or not, or even open to it. For me, this will contextualize my technical assessment of a candidate. Are you interested in project management? Are you interested in people management? Your resume currently says no, that you only want to code, but can work with non-programmers as necessary.
- “My main focuses are…” is both redundant (focus implies main-ness) and misapplied (focus shouldn’t ever be used in the plural, by definition, IMO). You could say something like “My core responsibilities are…”, etc. You use focus later in a similar way in the Objective section: “expand professional focus” to include some other languages. That’s the opposite of focusing.
- I’d take “SOAP/RESTful” out of the summary – it’s overly specific and not necessary.
- “often responsible” – Nah, you *are* responsible for that stuff. No need to lessen your impact with qualifying adjectives, especially in your resume of all places.
- The Objective section seems to say that you want more variety in your work than you currently have, and that you’d like to learn a few specific programming languages. Is that about right? It sounds like you’re bored. That isn’t bad, but that’s the impression I get. Again, what to put here really depends on what your actual objective is. Is it to get a job writing re-usable Java libs for web front-ends? Or a job writing Python scripts against big data-sets? You should decide that up front, and be as clear as possible about it. If you need two resumes, so be it.
- The bits about the actual work you did are great. Instead of just listing the thing, you explain the impact that the thing had, the long term result for the business, etc.
- If you have degrees from those college years, note them. The multiple year ranges you were at Acme College don’t provide value – you could just pick one, or aggregate them into one period where it makes sense.
- Needs a spell-check and grammar pass (eg: devloping, importanly, colloborating)
Final thought: What questions would I ask you after reading this resume, when we did a phone screen? What are things not called out explicitly in a resume, but are part of an experienced developer’s toolbox… IOW, the stuff that shows you know not only how to write code, but to *ship* software.
- I’d probably ask a couple of technical questions of medium difficulty, just to confirm that you can talk comfortably at the level of technical knowledge described in the resume.
- I’d dig into problem scenarios like the hardest problem you had to solve, both architectural and bug-wise. What tools/patterns you used, etc.
- I’d ask about performance problems – triaging them, debugging them, what tools for profiling, potential bottlenecks, etc.
- I’d ask about your testing and validation approaches, what unit test toolkits you’ve used, what automation you’ve written.
- Would ask you to describe your launch processes – things like staging servers, failover scenarios, backups, etc.
Ok, so maybe we aren’t in the post-browser age *yet*. But we’re getting there, and quickly. Most of the “apps” I use on my phone are useless without an always-on data connection, and they communicate with their respective motherships via HTTP. We’re staring down a near-future with multiple Web-stack operating systems for both desktop and portable devices. We have server-side application platforms that look startlingly like pieces of a traditional Web client.
All of those places are the Web, so that’s where Mozilla has got to be, when and if it’s possible to do so. And between desktop Firefox, mobile Firefox, Chromeless, B2G, Pancake, Open Web Apps and the various Firefox features developed by the Labs and the Services groups, we’ve got a lot of application logic that needs to exist in various forms across those disparate environments.
Up until recently, even including most of the mobile efforts to date, we’ve had a pretty narrow idea of what constitutes Firefox: Mozilla’s browser application with a front-end built in XUL, and rendering content using Gecko, stored entirely in a single source repository.
This narrow view is insufficient given the needs of internet users and the plans we have to serve those needs in the immediate future. This has been starkly illustrated by the recent move to a native UI for mobile Firefox, projects like Pancake, and the expansion of the development of Firefox features by groups outside of the core team.
A few months ago I dumped a couple of thoughts into a thread on the mozilla.dev.planning newsgroup about these things. More than anything, that thread showed me that the broad spectrum of activity in Mozilla today makes our narrow view of Firefox a huge barrier to future success. Some people didn’t agree that there was a problem at all. Some people railed against Jetpack or Github, while admitting they’d never used either. Some people agreed that developing Firefox is slow and fragile, and pointed at the relative historical success of that approach. Disturbingly, I got a bunch of private emails thanking me for starting the conversation… what does *that* mean?! Overall though, there was a lot of agreement on this point: We need more people to be able to work on Firefox faster, and in a more heterogeneous environment.
There’s a bunch of work towards that end going on right now, both in Firefox team itself and in Mozilla generally, around lowering the barriers to contribution. Specific to Firefox core development though, one experiment in alternate approaches is the attempt to ship the BrowserID feature as a Jetpack-based add-on that is developed on Github and bundled with Firefox. There are a lot of moving parts, but the exercise is helping us figure out the up- and downsides to building features as add-ons, as well as providing performance data about the Add-on SDK. Maybe it’ll work, maybe we’ll have to re-route and patch it against the core. Maybe we’ll land somewhere in-between.
Regardless of that experiment’s outcome, I think we need to be experimenting hard with how we develop Firefox, and asking questions about the longer-term development landscape:
- Code changes currently have non-deterministic effects in the Firefox ecosystem. We have a jumble of services that stagger into existence at startup, and then race for the exit at shutdown, beating up the file-system at both ends of the application lifecycle. “Async” is a pattern, not a system – without a system, making a bunch of things asynchronous means that the application’s behavior as a whole is generally less predictable. Is there a more systematic way that we can manage the loading and unloading of core browser services?
- Calcification: Check out the “cleanup page”. There are long-despised-and-untouched pieces of our core infrastructure, such as URL classifier, importers, autocomplete, and parts of Places. Why is it so hard to change these? What are the barriers to making them better?
- Modularity: Cu.import is great in that it provides some of the benefits that we used XPCOM JS services for, but without the XPCOM. But are we using it enough? Jetpack development puts much more emphasis on modularity via a core built on CommonJS, and I’ve found it to make browser features written in Jetpack far easier to follow, debug, and contribute to. Maybe we should be putting code into modules more than we do now, rather than adding it to browser.js or XBL widgets? This could reduce our dependence on the XUL window mega-scope that we get in browser.js, leading, I’d argue, to code that is easier to develop, debug, test and maintain.
- Abstracting the application logic away from XUL/XPCOM where possible, allowing for more portable code. This doesn’t make sense in a lot of places in the front-end, but in others such as sync or expiration policies or tab grouping algorithms or frecency generation, it might. These are things which could be useful across a number of different application contexts.
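To make the modularity and portability points concrete, here’s the kind of small, self-contained CommonJS-style module Jetpack encourages, versus yet another function added to the browser.js mega-scope. The module name, data shapes, and scoring formula are all invented for illustration – this is not Firefox’s actual frecency algorithm:

```javascript
// frecency-utils.js – a hypothetical CommonJS-style module.
// Everything it needs arrives through explicit arguments, and only what
// it exports is visible – unlike code added to the shared XUL window
// scope, it has no hidden dependencies on browser.js globals.

// Toy stand-in for a frecency-style score: more recent and more frequent
// visits rank higher. The weighting is invented for this sketch.
function score(entry, now) {
  const daysOld = (now - entry.lastVisit) / (24 * 60 * 60 * 1000);
  return entry.visitCount / (1 + daysOld);
}

function rankByFrecency(entries, now) {
  return entries.slice().sort((a, b) => score(b, now) - score(a, now));
}

// In a real CommonJS module this would be `exports.rankByFrecency = ...`,
// and a consumer would `require("frecency-utils")`.
```

Because nothing here touches XUL or XPCOM, the same module could in principle run in desktop Firefox, mobile Firefox, or a server-side context – which is exactly the portability argument above.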
So where from here? There’s general agreement that the Add-on SDK needs to ship in the browser. This might help address some of the questions above. However, it won’t immediately help us share code with other Firefoxes or Mozilla projects, or make core development inherently less-fragile or our application behavior any more deterministic. And there are tools like Cu.import, which we have now, and Harmony modules, which we might have soon (can we use those in chrome?!) that could help with the modularity part.
But only some of this is about the technology – other parts are social. As I said above, some people do not agree that developing Firefox is slow and fraught with peril. Is that plain ol’ resistance to change, or just the lack of a clear alternative? And maybe we code reviewers should be more forward-looking, demanding larger refactorings instead of non-invasive surgeries. But that’s challenging when you’re constrained for time, or the regression cost of refactoring is so high that you become risk averse.
I’d love to hear your thoughts on the future of Firefox application development – especially the core Firefox team, and the people working on Firefox features in other groups or via add-ons. Myk Melez has been corralling a group to talk about feature development with the Add-on SDK specifically, but it quickly spreads into these broader issues. He’s starting a list for it, but until then there are regular meetings, details available in his dev.planning post.
The Mozilla Open Data Project is an index of all of the open APIs and data-sources available in the Mozilla project.
It’s also something that does not exist yet!
Or maybe it does, but I couldn’t find it…
Anyways, we’ve got massive amounts of data available throughout the project, from check-in logs to performance data to bugzilla APIs. However, there’s no central location that lists all of the sources that currently exist. This also means that it’s not easy to scan and see what’s not available that should be.
Maybe this is something we should list on a public index that already exists, like Programmable Web.
For now, I started a list here: https://wiki.mozilla.org/Modp
Please add any sources of data or public APIs that you know of to that list, or here in the comments and I’ll add them for you.
UPDATE: To clarify, this is different than the community metrics work being done by the Metrics team. But we’re talking about having this information available in the metrics portal at some point in the future, likely driven off a publicly editable source.