
This web server does not exist

A web server whose responses are generated entirely by an LLM based on the request path.

Caution

This is intended only as a fun experiment. Do not use this concept in a production environment: it is fragile, vulnerable, and inefficient by the very nature of generative AI.

Demo Video

demo.mp4

Usage

No live demo is provided at this time. To try it out, follow the instructions below to run it yourself.

  1. Clone or download this repo.
  2. Ensure that Node.js version 22 or newer is installed. Older versions are untested and not guaranteed to work. All of the logic lives in a single index.js file that uses only Node.js's built-in libraries and some recent JavaScript features, so this experiment requires no external npm dependencies.
  3. Ensure that Ollama is set up. The script uses the qwen3:1.7b model by default (for quick performance when testing) and talks to it via the Ollama API (for now). Both the model and the endpoint can be changed by editing index.js (inside the getAPIOpt function for the model, and after the call to getAPIOpt() inside createServer for the endpoint). Other options, such as the listening port, can also be changed in that file.
  4. Start the Ollama server by running ollama serve, and pull the model if you haven't already.
  5. Run the server by executing node . (or node index.js; both do the same thing).
  6. The script should give you a port and a URL. Test it in your web browser. Example query: http://localhost:8080/about-me. The path can be any sensible path that the model can understand, as long as the file type is supported (and not a binary format).

Related links

Other cool projects and experiments that inspired this one:
