Yesterday I mentioned two options for adding generative AI to the command line (AI chat models at the command line): one is immediately useful (gorilla_cli), while the other is more informative about how to implement a Large Language Model (LLM) on a computer. Both are free options.
Yesterday I also discovered this blog post: Generative AI in Jupyter, which describes the brand-new implementation of AI within Jupyter notebooks.
The AI entity is named “Jupyternaut” and can be powered by a long list of APIs (Application Programming Interfaces) for LLMs from external providers, all of which require a paid subscription. However, the LangChain interface should let users access their own local models (in fact, this is why and how I found the LLM yesterday!)
As an exercise I was able to implement Jupyter AI using the information from the blog post, the GitHub repository, and the documentation.
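For anyone who wants to try the same thing, this is roughly what I ran; the package and magic names are taken from the documentation at the time of writing, so double-check them against the current docs:

```python
# Install Jupyter AI (in a terminal this would be: pip install jupyter-ai)
%pip install jupyter-ai

# In a notebook cell, load the magics that provide the %%ai commands
%load_ext jupyter_ai_magics

# List the registered model providers and the API keys they expect
%ai list
```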
It seems like a cool thing to do, but only with a paid API, which is accessed through an API key.
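For example, the OpenAI provider looks for the key in the OPENAI_API_KEY environment variable. A minimal sketch (the key value is of course a placeholder):

```python
# Cell 1: put the (placeholder) key where the OpenAI provider expects it
%env OPENAI_API_KEY=sk-...

# Cell 2: %%ai must be the first line of its own cell
%%ai chatgpt
Explain in one sentence what a Jupyter kernel does.
```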
I might go back to this if I can get a local LLM to work, for example replacing the LLAMA7B version with the LLAMA65B version mentioned in the previous post.
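If I do try it, my starting point would probably be a custom LangChain wrapper around a local model, since that is the interface Jupyter AI builds on. Here is a minimal sketch, assuming the llama-cpp-python bindings and a locally downloaded weights file; the class name, model path, and token limit are all my own placeholders, not anything from the Jupyter AI docs:

```python
from typing import List, Optional

from langchain.llms.base import LLM


class LocalLlama(LLM):
    """Sketch of a LangChain LLM backed by a local llama.cpp model."""

    model_path: str  # hypothetical path to a local GGML weights file
    max_tokens: int = 256

    @property
    def _llm_type(self) -> str:
        return "local_llama"

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        # Import lazily so the class can be defined without the bindings installed
        from llama_cpp import Llama

        llm = Llama(model_path=self.model_path)
        result = llm(prompt, max_tokens=self.max_tokens, stop=stop)
        return result["choices"][0]["text"]


# Placeholder path; swap in whichever local weights you have (7B, 65B, ...)
llm = LocalLlama(model_path="/path/to/llama-7b.ggml.bin")
print(llm("What is a Jupyter notebook?"))
```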