
nashsu / FreeAskInternet

FreeAskInternet is a completely free, PRIVATE, and LOCALLY running search aggregator and answer generator using MULTIPLE LLMs, with no GPU needed. The user asks a question, and the system runs a multi-engine search, feeds the results to an LLM, and generates an answer based on those results. It's all FREE to use.

8,729 stars
914 forks
65 issues
Python · Dockerfile

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing nashsu/FreeAskInternet in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind-ai.vercel.app/repo/nashsu/FreeAskInternet)
Preview: Analyzed by RepoMind

Repository Summary (README)

Preview

FreeAskInternet

🎉🎉🎉 Yeah, we have a logo now! 🎉🎉🎉

logo

Run a www.perplexity.ai-like app that is completely FREE, LOCAL, PRIVATE, and needs NO GPU, on any computer.

> [!IMPORTANT]
> If you are unable to use this project normally, it is most likely due to problems with your internet connection or your IP address; you need unrestricted internet access to use this project.

What is FreeAskInternet

FreeAskInternet is a completely free, private, and locally running search aggregator and answer generator using an LLM, with no GPU needed. The user asks a question, and the system uses searxng to run a multi-engine search, passes the results to the ChatGPT-3.5 LLM, and generates an answer based on them. Everything runs locally; no GPU and no OpenAI or Google API keys are needed.

Features

  • 🈚️ Completely FREE (no API keys needed)
  • 💻 Completely LOCAL (no GPU needed; any computer can run it)
  • 🔐 Completely PRIVATE (everything runs locally, with custom LLM support)
  • 👻 Runs WITHOUT LLM hardware (NO GPU NEEDED!)
  • 🤩 Uses the free ChatGPT-3.5 / Qwen / Kimi / ZhipuAI (GLM) APIs (no API keys needed! Thx OpenAI)
  • 🍵 Custom LLM (ollama, llama.cpp) support; yes, we love ollama!
  • 🚀 Fast and easy to deploy with Docker Compose
  • 🌐 Web- and mobile-friendly interface, designed for web-search-enhanced AI chat and easy access from any device

Screenshots

  1. Index:

index

  2. Search-based AI chat:

index

  3. Multiple LLM models and custom LLMs like ollama:

index

How It Works?

  1. The system takes the user's question in the FreeAskInternet UI (running locally) and calls searxng (also running locally) to search across multiple search engines.
  2. It crawls the content of the search-result links and passes it to ChatGPT-3.5 / Kimi / Qwen / ZhipuAI / ollama (via the custom LLM setting), asking the LLM to answer the user's question using that content as references.
  3. The answer is streamed to the chat UI.
  4. Custom LLM settings are supported, so theoretically any LLM can be used.
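
The steps above can be sketched in Python. This is a minimal sketch, not the project's actual backend: the helper names are hypothetical, the searxng base URL assumes the default local port, and the JSON API must be enabled in the searxng settings.

```python
def build_prompt(question, results):
    """Combine search snippets into a reference-grounded prompt for the LLM."""
    references = "\n".join(
        f"[{i + 1}] {r['title']}: {r['content']}"
        for i, r in enumerate(results)
    )
    return (
        "Answer the question using ONLY the numbered references below.\n"
        f"References:\n{references}\n\n"
        f"Question: {question}\nAnswer:"
    )


def search_searxng(question, base_url="http://127.0.0.1:8080"):
    """Query a local searxng instance for multi-engine results (JSON API)."""
    import requests  # assumes the requests package is installed

    resp = requests.get(
        f"{base_url}/search",
        params={"q": question, "format": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])
```

The assembled prompt is then sent to whichever LLM is configured, and the response is streamed back to the UI.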

Status

This project is still in its very early days. Expect some bugs.

Run the latest release

git clone https://github.com/nashsu/FreeAskInternet.git
cd ./FreeAskInternet
docker-compose up -d 

🎉 You should now be able to open the web interface at http://localhost:3000. Nothing else is exposed by default. (For the old web interface, visit http://localhost:3030.)

How to get and set Kimi / Qwen / ZhipuAI Token?

How to get Token?

We use the https://github.com/LLM-Red-Team projects to provide these services; see their README for details.

Reference : https://github.com/LLM-Red-Team/kimi-free-api

setting token

How to use a custom LLM like ollama (yes, we love ollama)

  1. Start the ollama server:
export OLLAMA_HOST=0.0.0.0
ollama serve
  2. Set the ollama URL in settings: you MUST use your computer's IP address, not localhost/127.0.0.1, because that address is unreachable from inside the Docker containers. The model name is the model you want ollama to serve. setting custom llm url

Ollama model reference: https://ollama.com/library
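
To see why the LAN IP matters: the backend container calls ollama over HTTP, so the URL must point at the host machine's address. The helper names below are hypothetical illustrations; 11434 is ollama's default port, and `/api/generate` is its text-generation endpoint.

```python
import json


def ollama_url(host_ip, port=11434):
    """Build the ollama endpoint URL. Use your computer's LAN IP here, not
    localhost/127.0.0.1 -- inside a Docker container those resolve to the
    container itself, not to the machine running `ollama serve`."""
    return f"http://{host_ip}:{port}/api/generate"


def ollama_payload(model, prompt):
    """JSON body for ollama's /api/generate endpoint (non-streaming)."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False})
```

For example, `ollama_url("192.168.1.10")` (a placeholder IP for your machine) yields `http://192.168.1.10:11434/api/generate`, which is the kind of URL to enter in the custom LLM setting.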

How to update to the latest version

cd ./FreeAskInternet
git pull
docker compose down
docker compose rm backend
docker compose rm free_ask_internet_ui
docker image rm nashsu/free_ask_internet
docker image rm nashsu/free_ask_internet_ui
docker-compose up -d

Credits

Special thanks to our logo designer

AdlerMurcus

<a href="https://github.com/AdlerMurcus"> <img src="https://avatars.githubusercontent.com/u/40649955?v=4" width="100" height="100" class="avatar avatar-user width-full border color-bg-default"/> </a>

License

Apache-2.0 license

Star History

Star History Chart