When I first created this blog, I spent a week wondering what to call it. I even asked ChatGPT for ideas (although I did not use any of its suggestions). Then, like a ray of hope from heaven, the word “CollaborAItion” entered my mind. Since that time, I have been searching for a tool or set of tools that would:
enable truly collaborative AI opportunities without encouraging users to veer into automation
adequately fulfill all of the requirements of education
be usable in the workplace, in order to “cultivate consistency from the course to the company”
My latest webinars and blog posts have talked about BoodleBox and its abilities. I use it occasionally to write my blog post outlines and create basic materials for my presentations.
Although many generalized tools (ChatGPT, Claude) have made overtures to the field of education and training, there are multiple issues (ethics, cognitive offloading, authenticity, privacy, etc.) that impede educators and institutions from incorporating AI tools and products (materials created through human-AI collaboration) into their curricula and teaching practices.
These same issues transfer readily to business and administrative operations, so BoodleBox is usable even outside its target audience. Many companies are experiencing the same problems educators face in course settings. For example, schools and companies alike have dabbled in “AI automation” and are now having to retract their previous goals and declarations about reducing formal and traditional education practices or their human workforces, respectively.
BoodleBox is intentionally designed for education in multiple ways, including
enhanced privacy protection,
deliberate design to permit human-AI collaboration while discouraging AI automation (think MagicSchool AI), and
teacher oversight of chats that does not require logging into separate systems.
This article reviews how AI tools are made, what implications that process has for education issues, and how BoodleBox addresses many of those issues to become one of the only truly secure and pedagogically sound LLM tools. Finally, it discusses how the features that make BoodleBox such a good fit for education also enable beneficial use in the workplace.
We Know How LLMs Are Built
In my most recent post, I talked about how I needed to create a new description and visual for the way that LLMs are trained. I discussed two main stages of “training” LLM tools.
Before the “model” is created, a massive corpus of text material is converted into values and vectors, and the syntax of communication is expressed through multi-dimensional networks of words and associations.
After the model is created, training takes on a different form. Conversations are analyzed as text to understand how humans use context to determine whether a response is appropriate. More direct feedback is gained through Reinforcement Learning from Human Feedback (RLHF), in which human trainers directly assign scores and direct the LLM to respond in ways that mimic real user interactions.
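To make those two stages concrete, here is a toy sketch in Python. It is only an intuition pump, not any provider’s actual training code: real pre-training learns dense embeddings over billions of documents, and real RLHF trains a reward model rather than simply picking the highest-scoring response.

```python
from collections import Counter
from itertools import combinations

# Stage 1 (pre-training, massively simplified): turn a tiny "corpus"
# into vectors by counting which words appear together.
corpus = [
    "students collaborate with teachers",
    "teachers guide students",
    "models learn word associations",
]

cooccurrence = Counter()
for sentence in corpus:
    words = sentence.split()
    for a, b in combinations(words, 2):
        cooccurrence[(a, b)] += 1
        cooccurrence[(b, a)] += 1

vocab = sorted({w for s in corpus for w in s.split()})
vectors = {w: [cooccurrence[(w, other)] for other in vocab] for w in vocab}
print("vector for 'students':", vectors["students"])

# Stage 2 (RLHF, massively simplified): human trainers score candidate
# responses, and the preferred response is reinforced. Here we just pick
# the highest-scoring candidate.
candidates = {
    "Here is a thoughtful, sourced answer.": 0.9,  # human-assigned score
    "lol idk": 0.1,
}
best = max(candidates, key=candidates.get)
print("reinforced response:", best)
```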
I also noted that after the formal second stage of training, when the model is released, models continue to ingest and train on conversations with users. OpenAI initially trained ChatGPT on all of its user interactions, but after widespread criticism it provided an opt-out; other providers followed suit. Some people are comfortable with their entire chats being used to train AI, but even those users may not be aware that most AI providers are not compliant with major privacy regulations (for example, HIPAA and FERPA).
Major Issues with AI Tools, Both General and Specialized
Privacy and Confidentiality
Training after the model has been released necessarily involves direct text derivation and analysis, which means that no conversation or discussion will be private. Yes, the user may “own the output and input,” as OpenAI states in its Terms of Use, but AI tools will still be able to train on the data; providers will also be able to sell that data.
The mere fact that the default setting on most AI tools is to train on the verbatim text of user prompts and AI outputs ensures that most educational institutions will not consider using ChatGPT or Claude, at least not the free versions. If the institution is a K-12 school, many students are not eligible for accounts in the first place. If it is a college or university, it still has basic data privacy issues to consider, not to mention FERPA.
Data privacy concerns are closely related to confidentiality concerns, which means that unless businesses and organizations pay for enterprise-level, data-“protected” accounts, they have the same vulnerabilities as educational institutions, just with different stakeholders and purposes attached.
Dependency on AI Tools
As I write this, I am reading an MIT study (and commentary on that study) that analyzes the differences in cognition between students who used ChatGPT collaboratively (writing drafts and notes about their projects before turning to ChatGPT) and those who used it for automation (going straight to ChatGPT to create their product). In a controversial conclusion, the study’s authors stated that the cognitive abilities of those who collaborated increased much more than those of the students who automated the process. Media outlets portrayed this as AI “frying” neurons and used other loaded language, but the original study had a point: AI automation breaks the Third Law (see below). Students are indeed offloading their cognitive loads and responsibilities to AI tools, and this is a detriment in many ways. One of my first posts, “The Dangers of Complete AI Automation,” covers the hazards of this tendency.
Lack of Supervision and Oversight
With the exception of “enterprise” accounts, AI tools generally offer supervisors and educators no ability to review their students’ conversations with AI tools. Even if educators somehow procure access, they have to go to an external service for each student. Even with enterprise accounts, they must also log into each different tool their students use to see whatever media they have generated.
In education alone, this is simply not feasible. Educators have regular assessments to grade, lesson plans to write, department meetings to plan for, attend, and act on, and many other responsibilities. Supervisors have similar duties at their organizations. No one has time to keep track of all the accounts and tools their subordinates are working with, much less to read their transcripts in depth and ensure they were “putting in their fair share” in the conversations. That is the responsibility of the students, and if educators have to do this, there should be some way to minimize the time those tasks require.
Specialized Tools Also Have Issues
Lest I be accused of forgetting specialized tools: I have not. I have written about specialized tools multiple times before, and they absolutely have their place! I still use Ideogram over ChatGPT when I am creating images with text, for example. However, I have increasingly found that specialized tools that actually do what they say they will are few and far between. My list of suggested AI tools for specific media has remained largely unchanged since I first wrote it two years ago.
For more information about the hazards of using specialized tools, look at one of my first blog posts, specifically the section about “Right Tools: Generalized vs. Specialized.”
Creating a framework of many specialized tools to handle every step in the process can lead to another form of AI dependency, this time spread across multiple tools instead of just one: we pass output from one bot to the next for each phase until, before we know it, AI tools in aggregate have completed 90 percent of the project.
In my first article quoted above, “Dangers,” I noted that the term “workflow” had been co-opted by the AI practitioner sphere to mean “a series of machines that go through multiple creative and iterative processes before providing a response to the user.” We need to make sure that our AI use is not simply passing our work from one AI to the next.
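To see that pattern concretely, here is a schematic sketch of such a “workflow.” The bot functions are placeholders of my own invention; the point is the shape of the pipeline, not any real tool’s API.

```python
# A schematic of the "workflow" pattern warned about above: output is
# passed bot-to-bot with no human contribution between phases.

def research_bot(topic):   return f"notes on {topic}"
def outline_bot(notes):    return f"outline from {notes}"
def drafting_bot(outline): return f"draft from {outline}"

# Each phase feeds the next; the human only supplies the first prompt.
product = drafting_bot(outline_bot(research_bot("AI ethics")))
print(product)  # the "90 percent" the AI tools completed in aggregate
```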
We need to keep focus, which often means using only one AI tool (or two if necessary); we need to include other humans in our conversations with AI to preserve human contact, collaboration, and oversight; and we need to protect data privacy, both through our prompts and through our recommendations to students, which requires an AI tool that will not pass data back to providers.
What is BoodleBox?
BoodleBox addresses all of these issues and more, and the fact that it tackles both privacy and automation is cause alone to prefer it over other AI tools. Some tools require human input but lack privacy controls, while others have strict data privacy rules (and boast about their independent scoring) but build their entire appeal on AI automation. BoodleBox takes a more holistic approach.
BoodleBox integrates multiple AI models from different providers, allowing users to interact with various tools simultaneously while maintaining context awareness across conversations. This approach encourages users to think deliberately about which AI capabilities are appropriate for specific tasks, rather than defaulting to a single, general-purpose tool for everything.
Most importantly, BoodleBox was designed with the understanding that AI tools should enhance human capabilities rather than replace them. AI bots are not always involved in a conversation (though an AI response is the default when only one user is participating). Human groups can work together, share files, and collaborate meaningfully, and then call on an LLM to contribute. Educators can join groups or supervise in a more hands-off manner.
The platform's features actively discourage automation while promoting meaningful collaboration, critical thinking, and human oversight—all essential components for both educational integrity and professional development.
Collaboration does not only affect the user as an individual or in smaller groups. BoodleBox users communicate as a larger group in BoodleU, a forum created specifically for training and use-case sharing, and in various councils centered on leadership (supervisors and educators), student experiences, and ethics. Users are directing AI and its development at BoodleBox, both in micro and in macro. This has led the development team to provide several advantages over the other tools vying for educators’ attention.
BoodleBox Has Several Advantages Over Mainstream Tools
The most obvious logistical advantage BoodleBox has over other tools is that it offers integrations with more than ten learning management systems, including Moodle and Canvas. But this advantage is, to be fair, superficial: it says nothing about the user experience.
Where appropriate and salient, in the interest of being as accurate and authentic as possible, content below has been pulled from the BoodleBox FAQ page.
Multiple LLM Providers
BoodleBox has entered into contracts with multiple LLM providers, including OpenAI, Anthropic, Google, and Perplexity, to use their models through APIs. On their end, the BoodleBox team developed a system that lets users call Perplexity, for instance, in a conversation that was initiated with GPT-4o. They added text-to-image generators like Flux and Ideogram, and research bots like Semantic Scholar. Then they developed custom Bots that incorporate the generalized models to help themselves and users with whatever projects they need. In this way, users are not pigeonholed in each conversation they have.
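BoodleBox has not published its implementation, but conceptually the routing might look like this hypothetical sketch, in which every bot called into a chat receives the same accumulated history. The class names and the send() method are illustrative assumptions, not real SDK calls.

```python
from dataclasses import dataclass, field


@dataclass
class Conversation:
    """One chat whose full history is visible to every bot called into it."""
    messages: list = field(default_factory=list)

    def ask(self, provider, prompt: str) -> str:
        self.messages.append({"role": "user", "text": prompt})
        # Every provider receives the SAME accumulated context, so a
        # Perplexity call can build on an earlier GPT-4o exchange.
        reply = provider.send(self.messages)
        self.messages.append({"role": provider.name, "text": reply})
        return reply


class EchoProvider:
    """Stand-in for a real provider API client (OpenAI, Perplexity, ...)."""
    def __init__(self, name: str):
        self.name = name

    def send(self, messages) -> str:
        return f"[{self.name} saw {len(messages)} messages]"


chat = Conversation()
chat.ask(EchoProvider("gpt-4o"), "Draft an outline on AI ethics.")
chat.ask(EchoProvider("perplexity"), "Find recent sources for that outline.")
print(chat.messages)
```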
Communicate with Multiple LLMs in One Conversation
In my post at the beginning of this year, I reiterated my “AI-instrument” comparison. I compared using one LLM to playing a piano, with all the nuance and subtlety of manipulating the instrument to create different types, durations, and combinations of sound. At the end of the article, inspired by my experience playing the organ in church, I compared multiple-tool AI-enhanced workflows to that “king of instruments,” as Mozart called it.
In BoodleBox, we have a pipe organ of enormous magnitude, like the one at the Salt Lake Tabernacle.
Copyright 2017, Intellectual Reserve, used under fair use doctrine.
While all LLMs are built on the same underlying technology and interaction process (chatbots), each one has its own strengths, abilities, and biases. Claude, for example, is better for more communicative writing; ChatGPT excels at more technical and direct communication. Other LLMs focus on writing and improving computer code. Still others take the conversational abilities of the more general tools and hone them for specific purposes, like editing writing or creating Boolean search queries. These are Custom Bots.
The environment of a BoodleBox chat allows the user to call multiple Bots into one conversation. For example, I like TranslaterBot, created by my friend Zach Kinzler, when I am conversing in Waray-Waray, a language I learned while in the Philippines. But this bot does not work well for discussing social and political issues in the Philippines, unless I want to learn words related to politics. When I research those issues, I turn to Perplexity. When I write about them and want to incorporate all the passion I have in my heart, I turn to Claude to brainstorm sentence ideas.
In addition to creative benefits, incorporating multiple tools in a conversation provides critical thinking skill-building opportunities as well. Rather than becoming dependent on a single AI provider's approach to problem-solving, students can compare responses across different models, developing critical thinking skills as they evaluate varying perspectives.
Include an AI with Human Members in a Group
BoodleBox facilitates collaboration between students and professors by enabling them to work seamlessly with multiple AI models, classmates, and faculty. It fosters thought-provoking discussions that fuel critical thinking and creativity. Students can easily cite and share AI-assisted work, and professors can gain insights into individual and group AI usage.
As I mentioned above, it is entirely possible for participants in groups to work together without involving AI at all. This is called “Message Mode,” and is an important part of “collaborative” rather than “automating” AI work.
Retain and Share Knowledge Documents and Chats
Knowledge can be shared in multiple ways on BoodleBox: through group chats, by incorporating your own documents, and by sharing chat folders. Students and educators can use this to learn content, and educators can additionally use it to examine student behavior if they wish. I would encourage educators to use this functionality primarily to look at aggregate learning gaps or needs rather than to assess individuals.
Retention and re-use of data can even happen mid-conversation. If you see your work going in multiple directions in the middle of a project, you can “fork” your chat into two or more chats—one with your original goal and the others with the exact same context, content, and knowledge base up to that point.
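Mechanically, a fork amounts to giving each new chat an independent copy of everything accumulated so far. Here is a minimal sketch of that idea; these are assumed mechanics for illustration, not BoodleBox’s actual implementation.

```python
import copy

chat = {
    "messages": ["brainstorm topics", "outline chosen topic"],
    "knowledge": ["syllabus.pdf"],
}

# Both forks start from identical context, content, and knowledge...
fork_a = copy.deepcopy(chat)
fork_b = copy.deepcopy(chat)

# ...then diverge without affecting each other or the original.
fork_a["messages"].append("continue toward the original goal")
fork_b["messages"].append("explore the tangent instead")

assert chat["messages"] != fork_a["messages"] != fork_b["messages"]
```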
Groups of chats, documents, and other materials, known collectively as “boodles,” can be organized into “boxes” (hence the name of the site) and shared with whomever the user wishes.
Facilitating Human-AI Machine Collaboration Instead of AI Automation
The other sections related to benefits have covered the “how” of BoodleBox’s collaboration facilitation. This section goes over why that emphasis is important.
The central goal of BoodleBox, helping students and educators develop collaborative skills and avoid cognitive offload, is in keeping with the Three Laws of Human-AI Machine Collaboration:
The First Law: A human may not use generative AI to injure a human being or, through inaction, allow a human being to come to harm.
The Second Law: A human may use generative AI tools for any objective they desire, except where such objectives would conflict with the First Law.
The Third Law: A human must engage in good-faith, literate collaboration with AI tools in every interaction as long as such collaboration does not conflict with the First or Second Law.
Zeroth Law: A human may not use generative AI to injure humanity or, through inaction, allow humanity to come to harm.
Asimov's Three Laws of Robotics
“The Machine is only a tool after all, which can help humanity progress faster by taking some of the burdens of calculations and interpretations off its back. The task of the human brain remains what it has always been; that of discovering new data to be analyzed, and of devising new concepts to be tested.”
BoodleBox’s emphasis on collaboration also facilitates public transparency. By sharing the chats, boodles, and boxes a class or workplace has used, stakeholders can see exactly when and how AI tools contributed. Collaborative projects can be tracked, managed, assigned, and evaluated entirely within the BoodleBox environment. Chat folders incorporate groups of chats and knowledge documents that can be shared between faculty and students in assigned groups. Obviously, this sharing would be done in accordance with FERPA when students are involved.
Creating Custom Bots for Course or Team Use
In addition to the general external LLMs and the Custom Bots provided by BoodleBox and other users, educators can create Custom Bots for their own courses. They can incorporate assessments, reading materials, images, and other materials and can restrict the tools that their students use (though this is best used sparingly).
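As a rough illustration, a course Custom Bot’s configuration might bundle instructions, knowledge documents, and tool restrictions along these lines. The field names are my own assumptions; BoodleBox’s actual Bot-builder fields may differ.

```python
# A hypothetical sketch of what a course Custom Bot's configuration
# might contain; all keys and values here are illustrative.

course_bot = {
    "name": "BIOL101 Helper",
    "instructions": (
        "Answer only from the attached course materials. "
        "Ask guiding questions instead of giving full solutions."
    ),
    "knowledge": ["syllabus.pdf", "unit1_reading.pdf", "lab_safety.png"],
    # Restricting available tools is possible but, as noted above,
    # best used sparingly.
    "allowed_tools": ["gpt-4o", "claude"],
}
print(course_bot["name"], "uses", len(course_bot["knowledge"]), "documents")
```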
I took the opportunity to develop ten Custom Bots, each of which I use depending on the groups with which I am working. In BoodleU, I shared my favorite Custom Bots that I have created so far. These Bots demonstrate not only the ability to incorporate external materials, but also the versatility in the types of responses one can elicit from the tools. If you like comic opera, I heartily recommend SavoyGPT.
Most Important: Privacy (We’ll Talk About This Later On)
One of the most important aspects of BoodleBox, particularly for K-12 institutions, is that the tool was deliberately built with privacy in mind. Unlike mainstream AI platforms that default to training on user conversations, BoodleBox prioritizes data protection and confidentiality from the ground up.
The privacy concerns extend beyond just educational settings. As I wrote in "The Danger of Complete AI Automation," one of the key hazards of AI use is "Privacy and Confidentiality: With whatever AI tool we use, we should always be careful to safeguard our privacy and confidentiality, as well as those whom we serve and for whom we work. The ease of automation, as well as the appealing nature of having a significant aspect of our work done for us, can cause us to put our guard down."
BoodleBox addresses these concerns through its architecture and policies, ensuring that conversations remain private and that data is not used for training purposes without explicit consent. BoodleBox does not train AI models, so your data will never be used by BoodleBox to train an AI model. If you use a BoodleBox-controlled model (such as LLAMA3), your data never leaves the control of BoodleBox. If you use a third-party model (such as one by OpenAI or Anthropic), your prompts are provided to the third-party API, but they are anonymized so they cannot be connected to you as an individual.
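As a simplified illustration of that anonymization idea, a pipeline like the following would strip identifiers before the outbound API call. The redaction rules and field names here are assumptions for illustration; BoodleBox has not published its exact pipeline.

```python
import re

def anonymize(prompt: str, user_email: str) -> str:
    """Strip identifiers so the third-party API cannot tie the prompt
    to an individual user."""
    redacted = prompt.replace(user_email, "[REDACTED_EMAIL]")
    # Remove any other email-shaped strings as a catch-all.
    redacted = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[REDACTED_EMAIL]", redacted)
    return redacted

request = {
    # No account identifiers are attached to the outbound request.
    "prompt": anonymize("Grade feedback for jdoe@school.edu, please.",
                        "jdoe@school.edu"),
}
print(request)  # {'prompt': 'Grade feedback for [REDACTED_EMAIL], please.'}
```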
Unlike other education-centered AI mediators, BoodleBox itself does not insist on retaining data for its own purposes. BoodleBox collects name and email information, platform usage statistics, and voluntary information shared via surveys.
Not only is BoodleBox FERPA compliant, it is also SOC 2 Type II, GDPR, and HIPAA compliant. Explore their comprehensive privacy and security statements for more information, and watch for a future blog post that examines this feature of BoodleBox.
Good for Education, Good for Work
BoodleBox's design principles make it uniquely valuable across both educational and professional contexts. The platform's emphasis on collaboration—consistently and meaningfully interacting with AI tools rather than simply automating processes—addresses core needs in both environments.
Privacy protections created for educational purposes, for example, will also protect the confidential data of organizations and businesses.
Team and group chats and the ability to fork and share, though created for educational projects, allow seamless integration of AI tools into administrative and office workflows.
Custom Bots, created for educators to include course materials in chatbots for specialized communication with students, can communicate about company policies, workplace standards, safety regulations, and other important documents.
There may even be another benefit for business that was not intended for students: innovation and proactivity. Users at all levels of an organization can use AI tools to brainstorm responsibilities, workflows, and initiatives before presenting them to coworkers or superiors. In fact, I created a Custom Bot to discuss potentially complicated issues and decisions like these.
Cultivating Personal Consistency from Course to Company
Another way that BoodleBox serves the workplace as well as education is that users can keep the account they had during their school years and use its chats and boxes throughout their careers. BoodleBox changes and adapts as you do: you can transition it from a coach in high school to an assistant in college and in your career, and you can have it take different roles in different chats as you learn new skills. BoodleBox, in this way, facilitates personalized learning that may not require formal education environments. The link goes to a webinar I did with Library 2.0 on using genAI tools for personalized learning; while I did not use BoodleBox in my examples, the skills and ideas in the webinar transfer easily.
Cultivating Consistency from the Course to the Company
My First Presentation at the AECT 2024 International Convention
Conclusion: Engaging in CollaborAItion with BoodleBox
As I wrote in that very first post on this blog (which I encourage you all to read; it brought back some memories and affirmed that my current projects are still in line with my original purposes), “[c]ollaborAItion is the concept of consistently and meaningfully interacting with AI tools and their outputs. … Rather than passively accepting AI-generated content, CollaborAItion encourages users to actively engage with the material, question its validity, and integrate it thoughtfully into their work.”
BoodleBox's design encourages what I've called "deliberate creation"—a mindful approach to using AI that maintains human agency and creativity while leveraging AI capabilities. For a quick overview of the interface, see this blog post. This aligns with my observation that "creativity and productivity are both centered around the creation of something, for specific goals, and for a specific purpose." By facilitating true human-AI collaboration rather than automation, BoodleBox helps educators, students, and professionals maintain their creative and critical-thinking skills while enhancing their productivity.
Coming Up: Creating an Ethical AI Framework (BoodleBox Will Be Included!)
As artificial intelligence tools increasingly shape how libraries serve their patrons and support academic missions, librarians must play a pivotal role in establishing ethically sound practices. This workshop offers participants a structured, accessible approach to identifying and addressing the ethical concerns surrounding AI technologies in library environments. Participants will explore key issues such as data privacy, misinformation, algorithmic bias, academic integrity, and authorship—grounding these topics within the frameworks of information literacy, labor ethics, and professional responsibility.
The 90-minute session begins by examining foundational concepts in ethics and how they apply to emergent technologies, followed by an investigation of AI’s impact on student privacy, faculty autonomy, and community equity. Drawing on scholarly literature, institutional guides, and media resources—including work by Reed Hepler, Torrey Trust, and international ethics bodies—attendees will engage in critical analysis and collaborative reflection. Participants will also explore open access tools and frameworks that support equitable, transparent AI adoption in educational and public-facing contexts.
By the end of the session, each attendee will create the foundation of a localized AI ethical framework suitable for their institution. This includes a set of guiding principles, actionable practices, and evaluative processes. These resources will be tailored to align with institutional values, professional best practices, and pedagogical goals, and they will be adaptable for both student-facing materials and internal staff training initiatives.
Participants will come away with a customizable template for creating institutional and personal ethical frameworks.
DATE: Tuesday, July 8th, 2025, 2:00 - 3:30 pm US - Eastern Time
COST:
$129/person - includes live attendance, any-time access to the recording and the presentation slides, and a participation certificate. To arrange group discounts (see below), to submit a purchase order, or for any registration difficulties or questions, email admin@library20.com.