06-21-2024 11:55 AM - edited 07-09-2024 01:20 PM
Hi folks,
In the next few days, we will start the Nightly experiment which provides easy access to AI services from the sidebar. This functionality is entirely optional, and it’s there to see if it’s a helpful addition to Firefox. It is not built into any core functionality and needs to be turned on by you before it appears.
If you want to try the experiment, activate it via Nightly Settings > Firefox Labs (please see full instructions here).
We’d love to hear your feedback once you try out the feature, and we’re open to all your ideas and thoughts, whether it’s small tweaks to the current experience or big, creative suggestions that could boost your productivity and make accessing your favorite tools and services in Firefox even easier.
Thanks so much for helping us improve Firefox!
09-04-2024 04:48 PM
Respectfully, I expect better of Mozilla than encouraging environmental harm for entertainment.
09-09-2024 08:20 AM
So if the responses these bots give are "correctly" not to be trusted, do you think that some people finding the prompt results "entertaining" is worth the mass data scraping, the plagiarism and theft, and the considerable energy cost required to run these machines? If we're still talking about whether they have a use in the first place when the negatives are staring us in the face, why is this being added as a core feature to the base version of the browser?
If someone at Mozilla is dead-set on making this, then at minimum make it an extension so that people can opt in to having it on their computer in the first place, rather than directing a mass audience to use an external private tool based on stolen data that *correctly* shouldn't be trusted.
09-09-2024 03:16 PM
Firefox is a web browser, not a game. People don't want Firefox to entertain them, they want it to do the job they downloaded it for: enable them to browse the internet with relative privacy. Chatbot integration degrades the privacy protection of the browser in, frankly, unacceptable ways.
Those who wish to use a chatbot for entertainment can use the standard web browser functionality to navigate to one on the web.
07-02-2024 11:19 AM
I want the AI chatbot feature: free, unlimited, with no account login and no geographical restrictions. I don't want to be blocked because I'm in China.
07-05-2024 12:20 AM
Are you able to download and run https://llamafile.ai? That would be free and unlimited with no account login. It runs on your computer, so you don't need internet access to chat, but it could be slow depending on your hardware.
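In case it helps anyone picture what chatting with a local llamafile involves, here is a minimal sketch of querying it from a script. It assumes the llamafile server is already running on its default http://localhost:8080 and exposes the llama.cpp-style OpenAI-compatible chat endpoint; the port, endpoint, and placeholder model name are assumptions to verify against your own setup.

```python
# Minimal sketch: ask a locally running llamafile a question.
# Assumptions: default port 8080 and an OpenAI-compatible /v1/chat/completions endpoint.
import json
import urllib.request

payload = {
    "model": "local-model",  # placeholder; llamafile serves whatever model it was built with
    "messages": [
        {"role": "user", "content": "In one paragraph, why does local inference keep chats private?"}
    ],
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```

Nothing leaves your machine in that exchange, which is the main appeal over hosted chatbots.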
07-02-2024 11:26 AM
It would be even better if AI could be used for search: enable an open-source AI model locally, and when searching, have the AI aggregate the search results and answer in the form of a human conversation, just like a chatbot.
07-05-2024 12:25 AM - edited 07-05-2024 12:26 AM
Are you suggesting the chatbot look at the search results page content, or would you expect Firefox or the chatbot to visit some of the linked pages for you? There are AI services that do this, like https://perplexity.ai, which you could configure as a custom provider and which also supports passing in your custom prompts.
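To illustrate the "custom provider plus custom prompts" idea, here is a small sketch of how a prompt could be packed into a provider URL's query string. The https://www.perplexity.ai/search?q= base URL and the q parameter name are assumptions for illustration; the provider you configure may expect a different pattern.

```python
# Sketch: URL-encode a prompt into a custom chatbot provider's query string.
# The base URL and the "q" parameter are assumptions for illustration only.
from urllib.parse import quote

provider_base = "https://www.perplexity.ai/search?q="  # assumed provider URL pattern
page_excerpt = "text selected from the search results page"
prompt = "Summarize these search results in plain language:\n" + page_excerpt

sidebar_url = provider_base + quote(prompt)
print(sidebar_url)  # roughly the kind of URL the sidebar could load for the provider
```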
07-06-2024 11:26 AM
I want the chatbot to look at the content of the search results page, summarize it based on that content, and answer in natural, human language.
07-12-2024 10:50 AM
Something like this?
07-13-2024 05:48 AM
yes
07-03-2024 08:49 AM
Loved the feature. I always have an open tab just for ChatGPT, so it's a big win, and it's nice that it can be disabled or opted into/out of by people who don't like it.
What I miss is a way to simply open the chat in the sidebar. The only way I found is to right-click and choose one of the options 'Summarize', 'Simplify Language' or 'Quiz Me', but none of these can be used to just open a plain, clear new chat, and if I close the sidebar, there is no way to open it again on the previous conversation.
What I would suggest is simply a button on the bookmarks toolbar to open/close the chat; that would suffice already 🙂
07-05-2024 12:36 AM
You can use Customize Toolbar to add a sidebar toggle button, and the chatbot will open a plain new chat. If you are logged in to some chat providers, you should be able to access previous conversations as well. Potentially Firefox could restore the last conversation, but others might prefer a clear chat, so maybe we could restore the conversation when reopening as well as provide a button to start a new one?
The improved sidebar also has a button to open/close the AI chatbot (as well as other sidebar tools).
07-04-2024 10:22 PM
Hey
Thanks for the work done.
I do not understand why you only plug into proprietary LLM services, some of which use proprietary LLMs.
My ideal would be to connect to an LLM model server such as https://github.com/ollama/ollama
Or https://github.com/vllm-project/vllm
Ideally through a proxy such as https://github.com/BerriAI/litellm
Is it possible?
07-05-2024 12:43 AM
You can configure a custom provider pointing to any URL, including one running on a local server such as your own computer. At a glance, it looks like there are various web UIs for ollama that can be installed separately, so you could at least have a private chatbot in the sidebar, but I'm not sure whether any of them currently accepts passing in prompts. https://llamafile.ai is another LLM model server that has a built-in web chat interface that accepts ?q=prompt URLs.
Potentially Firefox could directly interface with these services to render a custom Firefox chatbot or power non-chatbot experiences, instead of requiring a web chat interface as it does currently.
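In the meantime, anyone who wants to poke at ollama outside the sidebar can talk to its local REST API directly. This is a minimal sketch, assuming ollama is running on its default port 11434 and the llama3 model has already been pulled; swap in whichever model and port you actually use.

```python
# Sketch: query a local ollama server directly via its REST API.
# Assumptions: default port 11434 and an already-pulled "llama3" model.
import json
import urllib.request

payload = {
    "model": "llama3",
    "prompt": "In two sentences, what does an LLM proxy like litellm do?",
    "stream": False,  # request a single JSON response instead of a token stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```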
07-06-2024 01:38 AM
Hey guys, I just tried the Nightly version with the AI stuff.
Honestly it's good, but it would better fit the privacy standards Mozilla displays if it were possible to download and run models (e.g. 7B Llama 3 or Mixtral models) from the web browser (assuming the PC running Firefox is powerful enough), in a similar fashion to GPT4All.
It would allow for less powerful yet completely offline model execution.
Just food for thought: it's similar to the "localhost" approach with locallama, which is not very practical on Windows, for example, since it would require a Docker instance running on WSL to make something clean.
Peace - Spid. ✌️
07-12-2024 11:00 AM - edited 07-12-2024 11:00 AM
I believe GPT4All exposes an inference HTTP API but not a chatbot page that could be shown in the Firefox sidebar. Currently Firefox renders (potentially local) server-provided HTML in the sidebar, but we could build our own Firefox chat interface to show the results of inference -- potentially handled directly within Firefox, similar to translations and alt text, or by a server of your choice such as GPT4All running your preferred model.
Are there any particular use cases that you think must be handled with a local model?
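To make the difference between an inference HTTP API and a sidebar chat page concrete, here is a rough sketch of what calling a local GPT4All-style server could look like. It assumes the app's local API server option is enabled, that it listens on the commonly cited port 4891 with an OpenAI-compatible endpoint, and that the named model is installed; all of these are assumptions to check against your own GPT4All settings.

```python
# Rough sketch: call a local GPT4All-style API server exposing an OpenAI-compatible endpoint.
# Assumptions: the local API server is enabled, listens on port 4891, and the model name exists.
import json
import urllib.request

payload = {
    "model": "Llama 3 8B Instruct",  # placeholder model name (assumption)
    "messages": [{"role": "user", "content": "What stays on my machine when I use you?"}],
}
req = urllib.request.Request(
    "http://localhost:4891/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```

A Firefox-built chat interface would sit on top of an API like this, rather than loading a provider's own web page in the sidebar.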
07-11-2024 10:35 AM - edited 07-12-2024 10:55 AM
This whole thing flies in the face of the Mozilla Manifesto.
07-12-2024 11:14 AM
Are there privacy-focused models that you would recommend? This chatbot feature is compatible with locally running llamafile with models that can be trained on data you find acceptable. Even if the quality of these models isn't comparable yet, hopefully we can help others try them out and contribute to these efforts so that we can get them to a quality level to include by default in Firefox.
07-12-2024 05:04 PM - edited 07-12-2024 05:31 PM
I believe the bigger issue to people isn't a perceived lack of ability to use models that are:
Rather, the core issue is that it seems Mozilla is promoting and incentivizing people to run models that are:
Furthermore, these models also:
Even the Mozilla manifesto—I know it's not a rigid set of internal rules or anything, but just for the sake of argument—boldly states:
Then there are additional concerns, such as the tight coupling with the browser (due to reasonable limitations or not is beside the point), or the fact that Mozilla is, in a manner, endorsing and enabling these companies and their practices.
So it's understandable that this feature just isn't right to a lot of people.
I understand Firefox must remain a competitive browser if it wants to regain market share. I am aware that many are implementing similar features, and nobody wants to fall behind. I know that many users are, in fact, quite excited about this feature. Maybe the developers are, as well. Hell, even I admit I'd love to have a somehow technomagically ethically sourced and run LLM helping me around the web.
However, I imagine this is a very sensitive topic for a significant number of people. There is a lot of frustration with current AI trends and big tech, especially among the most tech-aware users. I think it's safe to assume that if this reaches stable as it is, there will be much valid complaining and criticism towards Mozilla. Either you already knew this, or you didn't think far enough ahead.
Sadly, I'm not sure what the right answer is here. I sincerely hope you folks manage to figure it out.
But asking...
@Mardak wrote:Are there privacy-focused models that you would recommend?
...with all respect, seems very tone-deaf considering all this.
07-12-2024 06:28 PM
I asked about privacy-focused models because there was a specific comment about training data before it was edited and made anonymous.
07-12-2024 06:51 PM
I see, I apologize for that, Mardak. Seems I assumed much.
Despite that, I hope the rest of my comment still provided some value. I was going to post something like it regardless, and just happened to read your response on my way there. The main point, of promotion, is my main concern with the feature.
07-19-2024 11:31 PM
The current interface for listing compatible providers doesn't really promote any in particular, and similarly there's no default choice either, as users get to decide whether to turn it on and which provider to use. Additionally, the sidebar allows easy switching between providers as yet another reminder to help people discover alternatives, and this will likely be helpful when we add more choices, which could include local inference of open models trained on more ethically sourced data.
There is work for providers to support passing in prompts and showing responses in the sidebar, and there's even more on Firefox's end for a local-inference chatbot. Do you think it's reasonable to start with what's already working to provide value to those who would already be using an existing chatbot while we work on better alternatives?
07-24-2024 09:05 AM
Sorry for the delay. I wrote a big reply trying to explain my thoughts, but apparently that wasn't posted. Either I forgot to click Reply, or Connect somehow ate it. A bit annoying, since this isn't the first time something like this happened, and there isn't even a draft saved on my profile... that makes me a little sad. Here's my attempt at recreating that comment:
I don't believe that because something is opt-in, has no default, and is easily switchable, it means you are incapable of promoting that thing. It took effort to put those options there, and Mozilla chooses which providers are easily picked with a click. People will be more likely to use those options. Most people will use those options. That is a form of promotion in itself, even if unintentional, even if smaller. Mozilla cannot claim to not be promoting these in some capacity when you take into account the full context of what including them as options means, long-term.
And I imagine the easy switching sidebar thing goes the other way around, too: given the trends we've seen in the field so far, even users that are painstakingly convinced to give future safe/open/private AI a chance will be able to quickly switch to a commercial alternative and see how much better they are (actual information quality notwithstanding, of course).
>Do you think it's reasonable to start with what's already working to provide value to those who would already be using an existing chatbot while we work on better alternatives?
That's a tough one. You should know that I'm very biased in this topic, so even if I try to answer from an objective point of view, my perspective is... not very positive. I do think it's relevant, though. So, to make an attempt:
It would be if the AI landscape weren't currently such a nightmare. I believe releasing the feature with "only what's already working" will look quite bad. Mozilla might explain "we're working on offering better options soon," but then folks will ask, "why didn't you do so from the start?" And if the answer is "we didn't want to be late to the market," well, that actually makes it worse. It feels rushed, and uncaring of the users who have a complicated relationship with the related technologies.
I know something in Nightly or Beta behind an option shouldn't be taken as released. Unfortunately, news travels fast and there's always people keeping an eye on and sharing what's happening in Nightly, and a lot of users don't fully grasp that—or might even forget it outright—in the heat of discussion. Even those aware of this might find themselves worried that, should they not make their voices heard loudly enough, it'll be too late to avoid something they consider bad. I don't say this to excuse rude people around the internet spreading misinformation, but to remind everyone that the human element can be difficult to handle even when you do everything right.
I think holding on until you have something better to show... will still make some people mad, of course, that's our lovely community. But I imagine it'd be significantly better, since Mozilla would have something more to show and point to than, respectfully, a chatbot sidebar that by default lets you easily pick between: questionable startup, shady startup, big tech, questionable startup, and finally, another questionable startup. All engaging in a controversial field with legislation years behind.
For comparison, everyone I've told about (and explained the context behind) project Bergamot has found it fascinating. It's bucking the trend, uncompromising, boldly stating: no, we will not surrender privacy for translations. We want both, and we'll have both. Meanwhile, most people I talk to about AI are tired and just about done with the topic (reminder: I'm biased). Releasing a sidebar that, to most users, only connects them with popular AI chatbots gives the impression that Mozilla is merely chasing a trend without deeper consideration.
I don't know how much more is within the scope of what you folks can reasonably be expected to do, though. What if it takes too long? What if it doesn't work well enough, or doesn't work at all? What if developers lose motivation, seeing a nearly complete feature languish instead of reaching users? What if the situation changes and work has to be scrapped?
I don't have answers to these questions, which is why I end up considering more extreme options despite not really liking them, such as halting development of the feature altogether, for now.
Hope that helps.
07-27-2024 08:05 AM - edited 07-27-2024 08:06 AM
Mozilla works in the open, transparently and collaboratively with the community, so that we can explore, iterate and learn from feedback on the way to releasing a feature for a diverse general audience. Practically, some things won't be as polished early on, especially for this feature, which relies on what chatbots and/or models are compatible.
Since we began this exploration, there's been significant progress in the AI landscape, such as https://llamafile.ai now with OLMo-7B (Open Language Models) allowing people to locally run a private open-source model trained on open data, so this feature supporting user choice could help grow even more truly open-source LLM efforts. However, this also isn't quite practical for average Release users yet, so again, should we have waited until all of these are ready and polished before we even start landing code in Nightly?
07-27-2024 08:53 PM
I think it depends on what Mozilla intends for this feature to be as it reaches stable Release.
Does Mozilla plan to, as an example, allow running a local model with the same effort as the other options for an average user? As in, with Firefox managing downloading, installing, and running the model for less tech-savvy users.
If deeper integration of local/open AI is...
Maybe I missed a statement, but I've only seen mentions of possibilities, no concrete plans.
>This chatbot feature *is compatible with* locally running llamafile
>when we add more choices that *could include* local inference of open models
>You *can configure* a custom provider to any url including those running on a local server
I'd really like it if Mozilla clearly and explicitly stated its plans regarding integration of local and/or open language models for this feature.
09-08-2024 10:17 AM
Sorry, but even if OLMo markets itself as an "open" model built from public domain data, it isn't that. A quick look at their dataset shows that the largest source of data is Common Crawl, which absolutely contains copyrighted content.
07-18-2024 01:43 PM
I understand that the purpose of this discussion is to promote community engagement with the AI feature so that there might be more adoption from those most involved, and thus a better product, but at the core of it, those who care most about the product see this addition as the wrong path. It seems like the discussion at large has shown that the core audience of the browser does not want AI, and that chasing this dragon will only lead to financial peril and a frustrated user base.
07-19-2024 11:42 PM
Are you suggesting that those who use ChatGPT or any other AI would not be core Firefox users and wouldn't be interested in using Firefox more even if it improved their experience and privacy?
09-09-2024 10:44 AM
In what way would Firefox integration improve the privacy of a ChatGPT user?
Either they are a paid-up OpenAI customer and have contractual terms that say their data won't be used for training, or they're using a free account and willingly (or unknowingly) throwing their privacy out of the window.
No amount of integration in a browser (or OS, or other app) is going to change that fundamental hyper-funded AI corp vs user balance.
And no, having other models available won't change much either. If they're a ChatGPT user then they'll see ChatGPT and use it. If they become concerned about their privacy then they'll look for other options using a search engine. No amount of "these options are in a list that most people will look at once, set to their preferred option, and then never touch again" is going to materially impact user behaviour.
09-08-2024 10:11 AM
Sorry, but user privacy is not the only thing here. There are no large language models with any meaningful capabilities that are not trained on stolen copyrighted data. That in itself is a privacy violation of millions of third party people. Doesn't matter if the model's weights are open source. It is still built from stolen data.
09-08-2024 10:26 AM
No, because, as you are aware, such models do not exist. It would be more honest to either not respond to AI critics at all, or directly tell them you (and Mozilla) don't care about their concerns.
09-08-2024 07:16 PM
You keep replying to people giving critiques of the privacy issues of all LLMs with requests for... LLMs without privacy issues? You aren't even reading what you're replying to. We're all saying the entire thing is needless and ethically unsound, not that we want you to use different models.
07-13-2024 03:34 AM - edited 07-13-2024 03:37 AM
Please stop destroying Firefox, pretty please.
Also, this is already possible. It's called tabs and copy paste. Those who want to use plagiarism machines can do it that way.
07-13-2024 08:31 AM
People are also plagiarism machines. You and me. So what? Mozilla must pursue such experiments, otherwise it will be left behind. This is progress, and to me the people who stand in its way are no better than green activists: they do nothing themselves and do not let others do anything either.
07-13-2024 09:58 AM
🐑
09-07-2024 06:03 PM
It is a good thing to be left out of a fad. Fads are fleeting and everyone ends up worse for going along with the obviously terrible idea once the bubble has popped.
09-08-2024 01:11 PM
"This is progress"
If this is progress then progress can jump off a cliff.
07-19-2024 11:57 PM
Are you referring to the Summarize option responding too similarly to the original text? Do you have suggestions for alternative prompts that are less likely to repeat previous works?
07-20-2024 01:06 AM - edited 07-20-2024 01:22 AM
He might be referring to the larger discussion around LLMs generating output that is eerily similar to their training inputs. "Plagiarism machine" is a term used derogatorily by people opposed to the current uses and production of the technology to refer to the popular AI and LLM models that took the world by storm. See the Copilot case for a related example. In general, this discussion around the internet also touches upon the morality of training LLMs on enormous datasets composed of many works whose authors did not consent to it, and who wish not to be included.
07-13-2024 05:56 AM
Firefox can now have multiple chatbots, but none of them can be used in China, so I recommend adding chatbots from China. For example, ERNIE Bot, whose Chinese name is "文心一言", and Kimi Chat. Their official websites: Kimi Chat • SmartAI and 文心一言 (baidu.com)