AI Feasibility (and How Open AI Tools and Workflows Affect It)
Determining if using AI is worth the time, effort, and (possible) financial expense
NOTE: Anyone who knows me will know that I have absolutely no filter when it comes to writing or saying what I think. I write things that are in their embryonic stages, and these might not be perfect representations of my ideas. I do not mean to offend anyone, and these posts are not meant to cast judgment. I am just thinking through typing. If I critique anyone through these posts, it is mainly myself. As evidenced by this post especially, much of the content I come up with for this blog comes from my own pedagogical and professional missteps.
Educators, librarians, politicians, florists, and bookstore owners have all probably been impacted by generative AI tools at this point. At the very least, most people have probably tried to use a generative AI tool like Groq or ChatGPT or Ideogram at least once.
For some of us who are early adopters, and others who are enthusiastic late adopters, the use of generative AI tools quickly became a habit that affected much if not all of our projects. We knew that AI was going to be a pervasive influence for students and patrons (or whoever our target audience was), and so we set out to understand it.
Interestingly for me, I noticed that within a month I began to use ChatGPT for almost everything. Even the most basic tasks were offloaded to ChatGPT because “it can do it, so why shouldn’t I use it?” In other words, I was quickly becoming AI-dependent (even if my skills were not atrophying just yet).
As I considered my usage patterns and interactions with ChatGPT and other tools across multiple formats, I realized that I needed to make some changes. For one thing, I created structures to ensure that I would think through my prompting. For instance, I developed a conscientious pattern of identifying objectives and goals to fulfill rather than information I wanted to understand or text I wanted it to write. Another commitment was to provide the tool with the data, information, or argument that I wanted it to communicate. I would not let ChatGPT do any critical thinking for me.
The other commitment, and one that I did not understand as important at the time, was that I would not use ChatGPT if I knew that I could do the task in a realistic amount of time by myself. At most, I would use the tool to edit or review the product I created. This is why the vast majority of this blog is manually written and ideated. I want to be authentic.
As time went on, I saw how these commitments were all related to each other. I saw that time, effort, skill-atrophy, and financial costs needed to be considered at the same time as I thought of potential benefits. From this thought process came the idea of “AI Feasibility.” Another term that one might use to replace “feasibility” is “necessity.”
What is AI Feasibility?
AI feasibility, as explained in my (completely authoritative and incontrovertible) AI Literacy Framework, is the mental consideration of factors that leads one to decide if an AI tool really needs to be used, or if a human can do it faster and more efficiently.
A (Working) Framework for AI Literacy
The moment I released this framework, I saw three or four frameworks that were much more organized and well-presented. They didn't include all of my facets, but they did cover most of them. Several included aspects that I had not thought of in my framework.
One of the commenters on the original post mentioned Dr. Regan Gurung's work with AI. At a recent conference hosted by the University of Missouri-Columbia, Dr. Gurung mentioned how he determines whether or not to use AI: the FEAL Framework:
Faster - Can AI help us work faster according to best practices and with our skills in mind, or would using our skills to do this manually be the more efficient approach?
Ethical - What would be the most responsible and ethical workflow to use an AI tool(s) for this task?
Accurate - How accurate would the AI tool be if performing this task on its own? (and I would add, What must I do to ensure that the tool output and finished product are accurate?)
Learning - Does using AI for this task help or hinder my learning new skills or subjects?
When we go through these processes, we realize how prone we are to outsource and offload our tasks to technology. If we apply metacognition while going through this process, we can understand why our minds think that way. Initially, at least in my case, I was giving AI the tasks that I did not want to do (and there are hazards to this; it's good to do hard things). Eventually, though, one can find oneself offloading the central parts of one's duties and shirking one's responsibility to produce high-quality products.
The Financial Aspect of AI Feasibility
I will have to admit that this aspect of AI feasibility was one of the last to cross my mind. Although my wife and I are lower middle-class, we were surprisingly willing to pay for a subscription to ChatGPT Plus (although that is the only AI-related subscription we will pay for). The abilities given to subscribers were more than enough to convince us to subscribe.
For other users, this is not the case. There are many financial reasons that people cannot pay to use AI tools. For this reason, I created an open access prompt engineering guide and list of free and open-access generative AI tools.
The Commercialization of EdTech
Recently, I saw another blog post discussing the difference between free and commercial EdTech products. The poster asserted that all free EdTech tools, AI or not, essentially trap educators with false promises and pedagogies that become useless if the tool becomes obsolete (much like Flipgrid this week). They focused on the lack of support in traditionally free tools and stated that relying on free tools is "a bad model" for teaching. And for most intents and purposes, they are correct.
However, the poster then stated categorically that no free tool is sustainable. They connected the general concept of "free" tools with cunning EdTech companies that lure educators in with "free" tools to trap them in subscriptions and gain more users. Essentially, they were anti-free anything, because in their mind anything free was just a ploy to grow a user base from which the company could eventually profit.
There are more hazards than the original poster discussed, one of which is the potential harvesting of user data and trends to optimize advertising (the concept behind the phrase "if something is free, you are the product"). Data privacy and confidentiality are especially important when considering generative AI tools, because the data you put, or imply, in your prompts can never be removed from the data harvested by the tool. Fortunately, there are solutions to many of these issues with open genAI tools.
How Open AI (not OpenAI) Promotes AI Feasibility
Openly available AI tools can provide many of the benefits of AI-supported educational technology while not entrapping educators and their students in detrimental situations or agreements. They may not be flashy or have all of the commercial capabilities, but they have many of the basic functionalities.
What Does “Open” Mean?
Well, if you want to understand that, I have an excellent blog to which I can refer you. David Wiley has been talking about this for decades. In fact, he has been so widely quoted and respected in this field that on my first day in my first professional job, I was pointed to his blog no less than five times by independent external sources while learning about Open Educational Resources. As I have been a student in two of his courses now, attended multiple webinars run by him, have met him in person, and talk with him with intermittent regularity on Discord, he has become a mentor to me. It is largely due to his influence that I have become steeped in the impact of "open" on the information and technology ecosystems.
"Open" in the case of educational technology means that a source or material (or tool, in the context of AI) is in the public domain or is given an open license: CC BY, CC BY SA, CC BY NC, or CC BY NC SA. These assets grant the following five permissions, called the 5Rs:
Retain – make, own, and control a copy of the resource (e.g., download and keep your own copy)
Revise – edit, adapt, and modify your copy of the resource (e.g., translate into another language)
Remix – combine your original or revised copy of the resource with other existing material to create something new (e.g., make a mashup)
Reuse – use your original, revised, or remixed copy of the resource publicly (e.g., on a website, in a presentation, in a class)
Redistribute – share copies of your original, revised, or remixed copy of the resource with others (e.g., post a copy online or give one to a friend)
The 5Rs description above came directly from David Wiley’s blog, from a page entitled “Defining the ‘Open’ in Open Content and Open Educational Resources.” This is licensed under the CC BY 4.0 International License.
"Open" for our purposes in this article is different than "any open access materials." "Open access" (strictly access only) adds two more licenses to this group: CC BY ND and CC BY NC ND. These materials are accessible but cannot be changed, adapted, or updated. We will not discuss them in this article.
I have my own opinions on the purpose and nature of “open” materials. That will be for another post (and probably not on this blog, as it is out of the scope). Suffice it to say that “open” things are created to facilitate the spread of information and skills. They are not meant to be commercial or lead to commercial opportunities. The only commercial activity that one could facilitate with these assets would be to teach with them for profit (ethical under certain circumstances, depending on objectives) or compile them and sell access to this curated collection (I have opinions on this, but I won’t get into that here. Message me on Discord.).
“Open” AI Tools and Workflows vs. “Free”
"That's all very well and good, Reed," you say, "but what difference does this definition make?"
Well, as the original poster above stated, many free tools can become paid, or they can fall apart because of lack of support. This puts educators in a bad position. But open access and OER tools will not put educators in that position, because the users take responsibility for their use and upkeep of these assets. Furthermore, these tools are based on functionalities and capabilities. They are not burdened by all of the attractive features of commercial tools.
This is why I prefer workflows, as I discussed in my last blog post. I define this term as "working through a multi-step process with a multimodal tool or multiple tools." It doesn't matter that I can't do everything all at once. If something is free and openly available, I do not mind using a few seconds to transfer from one tool to another (or in the case of GPT-4o, to call another Custom GPT).
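To make the idea concrete, here is a minimal sketch of what I mean by a workflow. Every function name here is a hypothetical placeholder, not a real tool or API; in practice, each step might be a different free or open tool (a summarizer, an outliner, a formatter), and "transferring between tools" is just passing one step's output to the next step's input:

```python
# A minimal sketch of a multi-step "workflow": each step is a small,
# swappable function, and the output of one becomes the input of the next.
# These steps are hypothetical placeholders for separate tools.

def extract_key_points(text: str) -> list[str]:
    # Placeholder "tool" #1: split the text into rough sentences.
    return [s.strip() for s in text.split(".") if s.strip()]

def draft_outline(points: list[str]) -> str:
    # Placeholder "tool" #2: turn the points into a bulleted outline.
    return "\n".join(f"- {p}" for p in points)

def run_workflow(text: str) -> str:
    # The "workflow" is simply chaining the tools in sequence.
    return draft_outline(extract_key_points(text))

print(run_workflow("Open tools are flexible. Workflows chain them together."))
```

The design point is that because each step is independent, any single "tool" in the chain can be swapped out (say, when a free tool disappears) without rebuilding the whole process.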
The Danger of Complete AI Automation
What Do We Mean When We Talk About AI Automation?
One of the terms most commonly associated with generative AI tools, especially on LinkedIn and in professional-centric groups, is "automation." "Learn How to Automate Your Work with AI." "I saved 80 percent of my time by using this AI-based scheduler!"
If you do not want technology companies exploiting and controlling teaching pedagogy, then the simple solution is to not let the companies control what you do and how you use tools. This is related to another concept taught to me by another mentor, Dr. John Curry. I have also taken several courses from him. In one of our first conversations, he told me of an axiom that is common in the field of instructional technology: “Do not use technology for technology’s sake.”
In other words, educators are responsible for making sure that the technology they use, promote, and teach students to adopt will actually improve the skills and increase the knowledge that are essential for the course. Many of the commercial tools (and some open-source ones, to be fair) are created and adopted in contravention of this axiom. They are created to fill some sort of need or problem, and to convince users that they are the only (or best) solution. When their solution fails or they raise (or introduce) prices, anyone (educators or otherwise) who exclusively relied upon their product is left bereft and searching for the next flashy solution.
The best open tools and workflows, like OERs, are generally created with one thing in mind: to foster the spread of information, skills, and abilities. They are not meant for financial profit. They are meant for well-tuned pedagogies that promote learning according to objectives. If they break down, educators who have used best-practices-informed pedagogy will not feel abandoned because they will know the skills necessary to communicate and teach without those technologies. Perhaps if educators remembered the reason for introducing technology in the classroom, they would not be so dependent on commercial tools, even if they are free. They would be more open to… using open tools and open workflows.
For information about open-source LLMs and the definition of “Open AI,” read David Wiley’s recent blog post entitled “Toward a Definition of Open Source AI.” Open-source means that the code and development of the tool are also open, meaning that anyone can take the data and processes and create their own tool for any purpose. Specifically, developers have the ability to:
Use the system for any purpose and without having to ask for permission.
Study how the system works and inspect its components.
Modify the system for any purpose, including to change its output.
Share the system for others to use with or without modifications, for any purpose.
Putting it Together (like Sondheim)
Here’s my reference to Sunday, again, because I love it so much.
Much like a Chromolume, my posts, presentations, and arguments take disparate pieces and put them together in a combination that may only make sense to me. The good thing about a blog is that I can write it down and then you can read this mess over and over again until it makes sense to you (or until you write me a message about why I'm wrong, either way).
My point for this post is this:
We need to seriously consider why we are adopting AI tools, in whatever work we are engaged.
Dr. Gurung's FEAL framework is an excellent way to analyze most of the factors of AI Feasibility.
When it comes to the financial (and data) aspects of AI Feasibility, do not discount open (or capable free-tier) AI tools and workflows just because they are not flashy. Opt for the most important base functionality instead of going commercial just because of flashy but less-important capabilities.
That is the state of the art.
Reed, you approach this in a way that is much more responsible and healthier than most. Given the work that I do, there are additional questions I want to ask anybody who's talking about using open artificial intelligence tools at this point. What are you using? Where is it hosted, or who is hosting it? Has the institution vetted it for accessibility, privacy, and security? Who's providing support?
These are the questions that don't come up often enough in blog posts about using AI in higher education. Given what I do for a living, they are vital questions. In some cases, with other open educational technologies and resources, there are plenty of organizations that provide support and can answer privacy, security, and accessibility questions. For others, there are not. If your institution is hosting the AI or has approved it, that is one thing. If not, there are a whole host of risks, including legal risks to the institution, that can come up. Also, while a lot of faculty can work through the setup and use of AI tools without support, there are many who cannot or do not have the time. For them, either institutional or vendor support is often needed.
Those are the kinds of things that I think about when discussions like this come up. That's before we get into all the other ethical implications of AI: how a given model is trained, what it is trained on, the biases it has, and a few other things. In many ways, this is like the early days of other technologies we use for education, but in other ways, it is much more complex, and that complexity may mean that we have to approach it much differently.