What do you meme?

It seems like that is the primary purpose for AI these days: making memes as fast as possible, without hesitation. A few years ago I thought of AI as an inevitable force we would need to accommodate, but as with everything, it is clear we need to harness its power for good. And we are definitely taking most of those steps a little late in the game.
To begin with, how do I use artificial intelligence (AI)?
To the best of my ability, I don’t use AI at all. At first that was a defensive posture, because opting out was still the default on most AI-enabled platforms: I could simply choose not to engage, and I chose not to. But the game has changed. Now we have to turn AI off, find the switch hidden inside the program or site, and make sure we read the fine print. With the latest updates to most of the platforms and major apps I use, I have found it a daily battle to ignore or switch off the AI element in each.
Doing a Google search?
Well, you get the AI overview to begin the results.
“Would you like Gemini to summarise your unread messages?”
No, not at all.
“How can Copilot help you today?”
It can’t. I’m writing a document in a word processor.
And the list could go on and on… But those are the three I encounter most throughout the day. Receiving a one-sentence email now results in a prompt to receive an AI summary. Not all of these tools offer an option to turn off AI involvement in my online or personal computing life. And that is the part that frustrates me the most.
A year ago I dismissed the idea that AI would overtake every sector, because it is a beast that needs to be fed: not just with information, but with water, electricity, and land. And as leisurely use of it among the general public has increased, as with anything that moves from the ‘want’ category to the ‘need’ category, people and governments are bending over backwards to make sure the beast gets every meal it requests. We are now in a new territory of colonialism: the occupation of physical space and resources, as well as the virtual property of the common person.
“Just say NO to new data centres in your neighbourhood?!”
“Hey look! ChatGPT can make cool memes out of my family pictures!”
You may notice at the bottom of my website a disclaimer against any of my material found here being fed into an AI generator for any purpose. After a few conversations about what has become common practice, I realised the landscape has shifted more than I first assumed.
Email submitted for a better reading experience…
Indexed and ready for future use.
Paper submitted for grammar and punctuation…
Indexed and ready for future use.
Picture submitted for touch-ups…
Indexed and ready for future use.

Yes, I know, this has been happening in some form for quite a while. We have been training Gemini for years by way of Google searches. Before Meta became Meta, the company was forced to reveal that it had sold its users’ personal information for data mining. Facial recognition has been part of all kinds of programs, from surveillance to marketing, for longer than most of us imagine. It was revealed over a decade ago that personal phone calls were being monitored without permission, for the sake of ‘national security’, in numerous nations. Then of course there was the suspicion that our smartphones were paying attention when we were not using them. People laughed, then became outraged, and now just ignore it.
The smartphone era is largely based on data submission and indexing for later use.
Where do you say no? Where do you draw the line? One of the ethical conversations that eventually arises in every sector is simply that of ownership. In some circles it is citation. In others, it is authorship. As some have put it, who is the artist behind the art? A regular occurrence on social media today, in threads where people are trying to verify a claim, is a screenshot from a search engine. That screenshot is usually the AI overview, not a link to a site or the actual portion of source material that speaks to the matter. It is indexed and repackaged information from many sources, presented as a source.
The source. The AI becomes the source, not an acknowledgement of the multiple sources collected and used.
To be fair, I do not take issue with every use of AI. Used as a tool, it can help the average person where they struggle, such as improving grammar and punctuation. It can help a photographer touch up far more pictures for a project in a timely manner than before. There are some simple functions that, regulated efficiently, could help the small business owner. I am also not naive; I realise it is in everything around me right now. The coffee shop I am sitting in as I write down my thoughts uses an AI program for in-the-moment inventory. When I use voice-to-text I know that Google is doing more with my voice than just printing out the words.
The ethics of AI use are muddy at best. These are the most common thoughts for me right now:
Infrastructure
Are we willing to commit part of the infrastructure we currently have in place to the common user? Is it wise to build more capacity to handle the added strain on the system when we already struggle with what we have? Electricity and data connectivity both need to be in place for this to work.
Land and Water
I had trouble with this issue when cloud storage became the norm. No one really asked where the data needed to go; we just liked having an endless number of apps and systems offering a seemingly infinite amount of space. Why delete pictures? You have iCloud! This was presented as a ‘green’ solution, using less paper, and the convenience was a huge selling feature. The problem?
- It isn’t as green as we first believed. Electricity requires production. Storage needs space. Water is used for cooling.
- Convenience requires connectivity. That means creating more data accessibility in more places to access what we have stored.
Source material
I have already spoken to this above, but it bears repeating. Plagiarism was, and is, a very real problem in academia, to the point where it is now normal practice to scan all submitted material to see if anything has been lifted from the internet. We don’t call AI proofreading plagiarism, but there are plenty of similarities. Where did the text come from? How was it altered? What attribution is given to those who created it?
Misrepresentation
The recent Grok case of doctored obscene images broadcast online without the consent of the people pictured was shocking, to say the least. It is a much bigger issue than what happens with Grok on the X platform, because plenty of image generators are available online, even through the average browser. Using one person’s identity to create another is an issue for today. Safeguards seem to be put in place only after a new issue is discovered, even when we are told these things are being researched and tested in advance.
Where do we go from here?
I am not fully convinced there is an entirely ethical version of AI use. But I am convinced that we will continue to use AI more and more in the very near future. Looking at some of the recent laws put in place, or being discussed for the near future, whole nations are showing their distrust of the social media brokers who have used FREE as a way to make a lot of money and gather a lot of intellectual property. This may be where my greatest angst shows itself: how comfortable are we with everything we own being owned and used without our knowledge? Not just what I created and freely submitted for the world to see, but my image and thoughts, thrown into the digital soup pot to become part of something else.
PROCEED WITH CAUTION might be a good sign for all our devices these days.