Artificial Intelligence (AI), Machine Learning (ML), and Artificial Neural Network (ANN) will all be used interchangeably.
ML features are being steadily added to Firefox. As a user, I'm noticing growing sentiment against these features in general and specifically against their inclusion in Firefox. In the interest of conveying this sentiment, I am posting this discussion and inviting other users to share their opinions and experiences.
Common sentiments I have encountered are as follows:
- AI reduces performance. Firefox has positioned itself as a performance-focused alternative to Chrome. Firefox's user-base increased in large part due to rising resource consumption in Chrome while Firefox remained relatively performant. Any locally hosted ML feature is in opposition to efficiency especially if the user has opted out. An ML model takes up disc space, occupies RAM when in memory, and uses significant CPU power when run.
- AI jeopardizes privacy. To reduce performance cost, ML features may rely on external services to run models. These services jeopardize the privacy of users by communicating browsing habits to third parties. The new preview summary feature seems especially relevant here. Should this feature be run externally, it would reveal a user's exact browsing destination. Additionally, ML models train on data. ML providers are eager to acquire more data to improve their service. If a provider were to so choose, they could harvest user data for their own purposes such as selling to advertisers.
- AI is anti-environmental. Due to the high resource usage of ML services, there is significant power consumption, hardware acquisition, and coolant usage in these markets. While Firefox does not outwardly position itself as an "eco" browser, AI usage as a service is inherently anti-environmental. This is the sentiment whether you consider the resource usage significant or not. These concerns are exacerbated when training these models is taken into account. All LLMs use large amounts of resources to train and usage of these models serves to subsidize that upfront cost by expanding the user-base. When examining environmental cost per user, the more users, the more this upfront cost can be divided up among users.
- AI is biased. Models are a result of their training data. Training data in the ML field repeatedly skews towards the demographics of the people producing it. LLMs are commonly only available in select languages that it has trained on and the sentiment is that languages other than English (the primary language of most AI providers) are deficient. Recently focus has been put on LLMs capacity to be manipulated by their owners such as the high-profile case of Grok seeming to have been changed to match the opinions of its owners.
- AI is inaccurate. LLMs are infamous for being unable to solve simple problems or disagree with the prompter. When a user prompts an AI, they expect that the AI will be accurate. Giving a model tacit Mozilla approval by including it in Firefox leads to a user’s expectation that the service will work, regardless of disclaimers. Users do inherently understand how ANNs work and trust that the tool they are using is of quality. An inaccuracy in a model is a bug with the software and Mozilla should expect that growing adoption will mean bug reports in this vein.
- AI models are vulnerable. LLMs are vulnerable to "prompt injection" attacks. This leaves summaries vulnerable to intentional misinformation hidden in the text to maliciously change the summary. Prompt injections could be used to provide a veneer of authenticity to bad actors and make them appear to have Mozilla’s approval just by them being repeated by a Mozilla-approved LLM.
With all this being said, the Mozilla forums are not a representative sample of the Mozilla user-base. My hope is that the opinions of these unrepresented users can be taken into account and constructive discussion can occur. Even if you disagree with the basis of these sentiments, this is user feedback and a request for the direction of the Mozilla project.