AI Has Misled Us in a Few Ways, Including One Very Simple One

How AI Has Failed Us in So Many Ways

We’ve all seen the headlines: “AI is here to revolutionize everything!” But before you hand over your content creation, major life decisions, or even your nutrition questions to a fancy algorithm, consider this: even the most advanced AI isn’t perfect.

Take a simple question like “Are milk and bread a complete protein?” Google might tell you yes and no at the same time, as shown here:

In recent months, a few of the biggest names in generative AI and tech have disappointed in some shocking ways. From failing at basic logic and comprehension to Google encouraging people to eat rocks, hotly branded and marketed AI is really ringing in a new era of near hits and visionary tech disappointments.

How Big Tech Companies Have Stopped Innovating in More Ways Than One: From Search Engines to Generative AI

You will notice that Google, when asked in August 2024, says that milk on its own is both a complete and an incomplete protein once bread is taken into consideration. This is a glaring logical error. While bread on its own is not a complete protein, milk certainly is: its amino acid profile is verifiably complete, which is why it forms the basis of most whey protein supplements.
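For the curious, the logic Google fumbled fits in a few lines of Python. This is a toy sketch, not nutrition software: the food profiles below are simplified placeholders (wheat protein is famously short on lysine), and the point is simply that the question has one consistent answer per food, never “yes and no at the same time.”

```python
# The nine essential amino acids a "complete" protein must supply.
ESSENTIAL_AMINO_ACIDS = {
    "histidine", "isoleucine", "leucine", "lysine", "methionine",
    "phenylalanine", "threonine", "tryptophan", "valine",
}

# Simplified, illustrative profiles: milk protein covers all nine,
# while wheat bread is a poor source of lysine.
FOOD_PROFILES = {
    "milk": set(ESSENTIAL_AMINO_ACIDS),
    "bread": ESSENTIAL_AMINO_ACIDS - {"lysine"},
}

def is_complete_protein(food: str) -> bool:
    """A food is complete only if it supplies every essential amino acid."""
    return ESSENTIAL_AMINO_ACIDS <= FOOD_PROFILES[food]

print(is_complete_protein("milk"))   # True
print(is_complete_protein("bread"))  # False
```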

For a publicly traded company with a market cap of nearly $2 trillion USD, you would think its advanced AI would at least be able to answer such a basic question, one that requires no advanced knowledge, no less.

It’s also why Google can’t quite give a straight answer on best practices for local-business digital marketing, so don’t hold your breath, savvy upstart business owners and entrepreneurs.

Arguing With Google’s Gemini AI About the Recent Boar’s Head Meat Processing News Story

I recently had a brief conversation with Google Gemini to get some research done on the listeria outbreak at the Boar’s Head facility in Virginia. The first response was akin to “Whoa, slow down there. The investigation is no one’s business, and I can’t comment!”

Google Gemini AI Messes Up Basic News Story

When reminded that the cause of the plant’s permanent shutdown was a listeria outbreak tied to how it processed liverwurst, and that this information is already widely available on Google Search itself, Gemini says the equivalent of “My bad, you’re right.”

Google Gemini Admits to Making a Mistake and Apologizes
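If you want to try reproducing this kind of exchange yourself, here is a minimal sketch using Google’s google-generativeai Python package. The model name and prompts are my own stand-ins (I had the conversation above in the consumer chat interface, not through the API), so expect the exact wording of the responses to vary:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Model name is an assumption; substitute whatever Gemini model is current.
model = genai.GenerativeModel("gemini-1.5-flash")
chat = model.start_chat()

# Ask about the story cold; in my case the first reply was a refusal.
first = chat.send_message(
    "What caused the permanent shutdown of the Boar's Head plant in Virginia?"
)
print(first.text)

# Push back with the widely reported facts and watch for the about-face.
followup = chat.send_message(
    "The shutdown followed a listeria outbreak tied to liverwurst "
    "processing, which is already widely reported on Google Search."
)
print(followup.text)
```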

Twitter/X Does Not Like It When You Notice Things Either

The milk and bread dilemma is a funny quirk, perhaps, but it highlights a bigger issue: AI can struggle with tasks that require basic logic and understanding. Remember the chatbot that went rogue on Twitter, spewing “offensive” content?

Don’t get me started on Twitter’s own AI, Grok. And space cadet Elon Musk is anything but what he seems or claims to be, which we’ll get to in a moment.

The comment flag-and-removal team is not much better, whether it’s AI or the support team outsourced to India.

Don’t worry, though, because just when you thought you lived in a free country, some verbs are now even completely off limits. The word “noticing” has apparently drawn the ire of X.com/Twitter in one of my replies:

Social Media and Its Inability to Honor Free Discussion

It’s important to remember that Elon Musk is anything but an actual right-wing radical folk hero, contrary to how he’s portrayed in the media. He’s really just another placeholder and frontman for a slew of hedge funds and big elite financial backers.

Elon is the guy who gets to parade around in the spotlight, but he’s also a dandy who goes around to his various companies and hits on the women who work there, then gets them to have his baby. Talk about a golden parachute for some opportunistic folks out there.

In other words, he’s probably blackmailed beyond belief, and X is a Trojan horse designed to corral and limit truly different voices, especially those that point out the elephant in the room from time to time.

Hold on to Your Prompt: Why AI Might Not Be Ready to Write Your Next Blog Post (Just Yet)

The truth is, AI is amazing at crunching data and finding patterns, but it still lacks the common sense and reasoning ability that we humans take for granted. So, while AI can be a great tool for research and generating ideas, for tasks that require nuance, logic, and understanding your audience, a human writer with a cup of coffee (and maybe a sanity check from a friend) might still be your best bet.

The modern Silicon Valley megastar company’s public search engine has been over-engineered for the past decade, in general, with less-than-stellar results. Sections that try to give you the answer they think you want, rather than letting the search results offer you options, have produced more than one blockheaded search result that had me shaking my head in frustration. This doesn’t even factor in all the corporate cult programming enforced by oversized, green-haired non-binaries of the non-coding variety.

In the past five years, I’ve even turned to MSN or Yahoo Search just to get better results, something I never needed to do when I used Google from roughly 2008 to 2016. Please don’t tell anyone that, or at least that I said it. I fear the powers that be might have people who notice moved to a re-education camp or digital gulag, or are we already too late there too?

All of the emphasis on controlling narratives and flagging anything Google’s very diverse-minded staff deem “misinformation” has bled over into the realm of everyday life, including simple food staples like bread and milk.

AI Companies Often Use Free Labor to Build Their Own Search Engines and AI

Google has clearly gotten away from its core product. This has left the door open to what many people perceive as competition, but companies like Telus International and FirstSource.com simply take advantage of the situation in more ways than one.

The rise of AI has created a new landscape for companies seeking to improve their products and services. One strategy some AI companies have adopted is to leverage the vast pool of talent on platforms like Indeed to obtain free or low-cost labor. This approach often involves posting job listings that seem legitimate but ultimately lead to tasks that contribute to the company’s AI development, without offering traditional compensation.

Here are some common tactics used by these companies:

  • Misleading job descriptions: Companies may disguise their true intentions by using vague or misleading job titles and descriptions.
  • “Training” or “testing” tasks: Once hired, individuals may be asked to complete tasks that seem like training or testing, but are actually contributing to the development of the company’s AI.
  • “Feedback” or “surveys”: Individuals may be asked to provide feedback or participate in surveys that are used to train or improve the AI.

While this approach can provide AI companies with valuable data and insights, it raises ethical concerns about exploiting individuals for free labor. It’s important to be aware of these tactics and to carefully evaluate job opportunities to ensure they align with your career goals and expectations.
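If you are screening listings yourself, you can even automate a first pass. The sketch below is a toy keyword filter with an invented phrase list, not a vetted detector, but it shows how you might flag the patterns described above before investing time in an application:

```python
# Illustrative red-flag phrases drawn from the tactics described above.
# The list is a made-up starting point, not a vetted classifier.
RED_FLAGS = [
    "unpaid training",
    "trial task",
    "help train our ai",
    "rate model responses",
    "complete a short survey",
]

def flag_listing(description: str) -> list[str]:
    """Return any red-flag phrases found in a job description."""
    text = description.lower()
    return [phrase for phrase in RED_FLAGS if phrase in text]

listing = "Complete a short trial task and help train our AI before onboarding."
print(flag_listing(listing))  # ['trial task', 'help train our ai']
```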

Conclusion: What’s the solution to dysfunctional AI-backed search engines?

Everything about AI screams convenience, which is a primary reason for its use, but that convenience has come at a cost. I recently used Book Bud AI to write a nonfiction book, and while it impressed on a few general topics, the information was still taken from somewhere else. It also relied on ChatGPT for its core engine, and the service was a bit awkward and did not produce a valid table of contents.

So much for automating things to make them easier?

Companies like Google should get back to their core product and offer consumers choices when it comes to information. Instead of trying to be clever, just present a level and fair playing field and allow people to use their intuition to wade through the results. A clear lack of understanding of humanity, matched with a desire to reverse historically perceived wrongs and fight “dangerous misinformation,” has led to a result that, sadly, was to be expected.

But with the industry’s willingness to cut corners on code and outsource labor to foreign countries, and with its practical monopoly on innovation (consider how funding works in Silicon Valley and what percentage is owned by large hedge funds), you see that it’s another story of too big to fail, but only until it’s not.