Free Tools For Utilizing Open Models for Generative OERs and Generative Teaching
"Generative OERs, Generative Learning,..." Part 2: Expanding on Dr. David Wiley's Suggestions
Recapitulation
In the last post, I wrote about generative OERs and generative teaching. Generative OER combines open-access principles with AI's content-generation capabilities to facilitate the development of new educational resources, whether for groups or individuals. Generative teaching and learning are experience-based educational practices enhanced by AI tools. In both, the student must be an active participant in the process.
In David Wiley’s webinar that inspired this post, he talked toward the end about using open models for generative OER creation and generative teaching exercises. This article is meant to provide links and information about some of those services and tools, and to share my thoughts on them. By no means should you view my thoughts as authoritative; David knows more than I do. I just wanted to provide one more place to find tools for creating generative OERs or using them in generative learning.
I should stress before you proceed further that all of these tools will probably be very slow. Also, before you download any LLM or other genAI tool to run locally on your computer, be absolutely sure that you have enough storage and memory, and check that your graphics card can handle the extra strain. The first time I tried to use LM Studio, it blue-screened my computer.
If you are worried about computer performance, I would suggest trying Llamafile first.
LM Studio
This is a free downloadable application accessible at https://lmstudio.ai/. Using this app, you can:
run LLMs locally, without an internet connection, in the User module,
input up to five documents and use an LLM to generate content from them,
download model files from HuggingFace repositories to develop your own AI models in the Developer module, and
learn about comparable models through the Discover module.
LM Studio does not collect any user data. Privacy is a major reason people run LLMs locally in the first place, and the company does not want to damage its user base.
How does LM Studio fund app development, then? The answer is LM Studio @ Work, the commercial version of the app.
Llamafile
Llamafile is a free download available through Mozilla-Ocho on GitHub. Mozilla Builders essentially built a user-friendly interface on top of llama.cpp so that various LLMs can run without the Python commands that were previously required.
The tool bundles the interface, a C library, and an LLM (most commonly LLaVA) into a single package. All you have to do is download the file, open your Command Prompt or terminal, rename the file with a “.exe” file extension (for “executable”) on Windows, and run it. Your browser should open immediately; if it does not, go to localhost:8080 in your browser. When you are finished, close your browser, return to the Command Prompt, and press Ctrl+C.
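Once the server is up, you can also query it from code rather than through the browser: the llamafile server inherits llama.cpp's OpenAI-compatible API. Here is a minimal Python sketch; the `/v1/chat/completions` path follows llama.cpp's server, and the model name is just a placeholder, so treat this as an illustration rather than official usage.

```python
import json
from urllib import request

def build_chat_payload(prompt: str, temperature: float = 0.7) -> dict:
    """Build an OpenAI-style chat payload for a local llamafile server."""
    return {
        "model": "LLaVA",  # local servers generally accept a placeholder name
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def ask(prompt: str, base_url: str = "http://localhost:8080") -> str:
    """POST the prompt to the local server and return the reply text."""
    data = json.dumps(build_chat_payload(prompt)).encode()
    req = request.Request(
        f"{base_url}/v1/chat/completions",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# ask("Draft a one-paragraph OER summary of photosynthesis.")  # needs the server running
```

Because the payload format matches OpenAI's, the same sketch should work against other local servers mentioned in this post that speak the same API.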
Windows cannot run executables larger than 4 GB, but there are ways around that. Incidentally, the workaround also lets you change the “weights” of the model (the values that shape the LLM’s word choices and predictions): if you download only the llamafile software, you can point it at weights from other places. I recommend downloading the weights as a separate file, and then using the command line to make the runtime load both files together.
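Concretely, the separate-weights approach amounts to handing the bare llamafile runtime a GGUF weights file at launch. A sketch of assembling that command follows; both filenames are hypothetical examples, and `-m` is llama.cpp's flag for specifying a weights file.

```python
from pathlib import Path
import shlex

def llamafile_command(runtime: str, weights: str) -> list[str]:
    """Assemble the command that runs the bare llamafile runtime against
    separately downloaded GGUF weights, using llama.cpp's -m flag."""
    return [f"./{Path(runtime).name}", "-m", weights]

# Both filenames below are hypothetical examples.
cmd = llamafile_command("llamafile.exe", "mistral-7b-instruct.Q4_K_M.gguf")
print(shlex.join(cmd))  # the line you would paste into your terminal
# subprocess.run(cmd) would launch it once both files are downloaded
```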
As of October 11, 2024, the tool has had 28 releases and 45 contributors. You can download it at https://github.com/Mozilla-Ocho/llamafile.
You can also use llamafile to run LLMs downloaded through LM Studio, by issuing a command that combines the model’s filename, as LM Studio saved it, with the llamafile runtime.
GPT4All
This is a local LLM interface tool much like LM Studio, with several differences:
you can directly control the temperature, context window, and other aspects of your interactions.
you can download batches of documents and data and keep them as discrete collections. Then, you can have the model interact with as many as 20 files (possibly more) at a time, and you can do this repeatedly without having to re-upload them.
you can connect entire services, such as OneDrive and Google Drive, to GPT4All by having its folders serve as a Collection in the application.
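To illustrate what a document collection amounts to, here is a small Python sketch that gathers supported files from a folder into a named, reusable group. The extension list and the dictionary structure are my assumptions for illustration, not GPT4All's internals.

```python
from pathlib import Path

# Assumed extension list; GPT4All's actual supported formats may differ.
SUPPORTED = {".pdf", ".txt", ".docx", ".md"}

def build_collection(folder: str, name: str) -> dict:
    """Gather every supported document under one folder into a named,
    reusable collection, so nothing has to be re-uploaded later."""
    files = sorted(
        str(p) for p in Path(folder).rglob("*")
        if p.suffix.lower() in SUPPORTED
    )
    return {"name": name, "files": files, "count": len(files)}
```

Pointing a function like this at a OneDrive or Google Drive sync folder is essentially what the whole-service connection does: the folder's contents become the collection.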
OpenWebUI
This tool, located at https://github.com/open-webui/open-webui/blob/main/README.md, is possibly the most versatile open-source genAI-related tool I have come across, but it also appears to be the least user-friendly by far. Furthermore, it mostly works with Ollama-provided AI models and is an open-source extension of that platform.
OpenWebUI can connect to the internet and include web pages in RAG (retrieval-augmented generation).
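Under the hood, RAG tools generally split retrieved pages into overlapping chunks before embedding and searching them. A minimal sketch of that chunking step is below; the sizes are arbitrary illustrative choices, not OpenWebUI's defaults.

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks, the way RAG pipelines
    prepare retrieved web pages before embedding. Sizes are illustrative."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks
```

The overlap keeps sentences that straddle a chunk boundary retrievable from either side, which is why most RAG pipelines use it.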
It can also perform conversation-like browsing.
If you have image-generating models such as Stable Diffusion, you can apparently also use OpenWebUI to interact with those as well.
You can use OpenWebUI to talk to multiple LLMs in parallel. You cannot switch models automatically within a single conversation, but you can hold simultaneous conversations with several models and move between them manually.
You can design your own LLM models more flexibly with this tool than with the others. Be aware, though, that all models created using OpenWebUI will be derivatives of the Ollama family.
I will be honest: OpenWebUI is the tool with which I have interacted the least, and I do not know much about its capabilities or how to take full advantage of its web features.
AnythingLLM
This tool, conveniently available directly from anythingllm.com, is a desktop application that can run GPT-4, an external open-source model, or other models. It can connect with LM Studio as well.
AnythingLLM permits exploration of a wide range of files at unlimited scale, including PDFs, txt files, Word documents, and more. The lack of a file-count limit means you can include more context than in LM Studio.
AnythingLLM also has a Browser extension, which means you can include web information in a query to your local LLM model. You can also save web pages and content for later discussion.
AI Agents in AnythingLLM can facilitate document summarization, website scraping (you have to make the ethical decisions), chart generation, and even SQL query generation.
You will eventually be able to fine-tune LLMs locally and manually in AnythingLLM, but for now the company offers fine-tuning as a made-to-order service. Training a large language model requires a large amount of memory.
If you know Python, you can even create custom instructions for AnythingLLM AI Agents.
There are many more tools and models out there, but these are the ones most common in my circles and the ones I have consistently heard good things about. Let me know of any other models or tools you prefer!