These days it seems like nearly every tech startup is touting the use of AI in their products or business processes. They release press and marketing materials advertising smart, new features that “look” like artificial intelligence, all under the guise that this rebrand will serve the end user better.
“We’re doing it to save you money … time … problems,” they proclaim!
Unfortunately, recent data shows that companies are less than honest about their use of artificial intelligence, advertising AI product features that are really just basic automation technology features.
There is value — cash value — associated with a company’s ability to appear “tech-savvy”. The UK investment firm MMC Ventures says that startups with some type of AI component can attract as much as 50 percent more funding than other software companies. Never mind that the Wall Street Journal suspects 40 percent or more of those companies don’t use any form of real AI at all.
“Artificial Intelligence” is the ultimate marketing buzzword. Companies can score a lot of extra press and investor attention for claiming to be an A.I. company. Community tech forums are already reeling over it, disappointed by the dishonesty that seems to be gaining traction worldwide. We think the general hype will only snowball, making the blurred lines blurrier and the gray areas…well, you guessed it.
But from the company perspective: why not risk it? The concept of A.I. for business is tough for even some experts to understand, especially non-technical CEOs. It’s rare to find technical officers who have mastered artificial intelligence, and it’s super rare to have a technologist heading a company who’d know how to check if a competing company is using fake A.I.
Princeton professor and AI ethics commentator Arvind Narayanan suggests in his findings that some companies exploit this public confusion by slapping an “AI” label on their products — a term he describes as “AI snake oil”.
Now, non-technical small business owners are forced to go toe to toe with “AI-powered” brands that are actually nothing more than algorithms and basic automation strategies. And the customer? They are mere victims, overpaying a company that is doomed to underdeliver.
To better identify how to spot companies misleading us with their A.I. buzzwords, we figured it’s best to showcase a few ways businesses have been caught using fake A.I. in the past. We’ll also discuss why they do it, and what it means for you.
The examples below prove that some businesses subscribe to a very loose definition of artificial intelligence. Is anyone going to tell them that “human-assisted” technology is NOT real artificial intelligence?
A Chinese voice recognition software company made bold AI claims at an international technology forum in 2018, demonstrating before a live audience how its automated voice software could supposedly interpret speech from English into Chinese in real time.
The company scored millions in funding by promising to deliver next-generation smart speech technology — that is, until a former employee claimed to be one of the human “interpreters” tasked with simultaneously translating the speeches from behind the screen.
Another UK automated speech recognition company, now defunct, was accused by the BBC of secretly stashing humans at faraway call centers who manually converted voicemails into text while the company raised enough funds to go fully automated. Unfortunately, the undercover outsourcing was in direct violation of the UK’s data privacy laws, and the company’s PR blunder ruined its chance to live out its A.I. dreams.
The Wall Street Journal investigated numerous instances of “exaggerated tech-savvy” common among A.I. startups in August 2019. According to one report, an India-based app development platform backed by SoftBank was not, in fact, using A.I. to build apps as advertised.
The claim that users could build up to 80 percent of a mobile app from scratch in about an hour helped secure nearly $30 million in funding, despite the fact that the company employed human engineers in India to assemble the code, not automated intelligence features.
Today’s artificial intelligence personal assistants sound just like a real person … perhaps it’s because they sometimes are. Bloomberg uncovered multiple instances that suggest the mild-mannered helpers designed to schedule your meetings and order your food aren’t entirely robotic. Former employees of multiple personal assistant software companies say that there is a human behind most automated actions.
For example, a large percentage of emails sent through one company’s scheduling service pass through human hands, not a machine. One employee was paid a reasonable salary to sit in front of a computer for 12 hours a day, clicking and highlighting phrases entering the system in an effort to help the computer “learn” and formulate a proper response. Disgruntled employees from another popular A.I.-powered product called the technology “smoke and mirrors if anything”.
Many of the companies using fake A.I. justify their actions with the same logic: The technology that makes this work (machine learning) needs vast amounts of “information” to get it right. Humans are needed (at least in initial stages) to improve and correct software when it gets stuff wrong. This requires human eyes and ears to review and annotate data so engineers can use it to fine-tune algorithms on the back end.
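To make the pattern concrete, here is a minimal sketch (in Python, with entirely hypothetical names and thresholds — not any company’s actual system) of the human-in-the-loop setup described above: the software answers on its own only when it is confident, quietly hands uncertain cases to a human, and banks the human’s corrections as training data for later fine-tuning.

```python
# Hypothetical human-in-the-loop sketch. The "model" below is a stub;
# real systems would use a trained classifier in its place.

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff; real systems tune this


def model_predict(text):
    """Stub classifier: returns a (label, confidence) pair."""
    if "schedule" in text.lower():
        return ("create_meeting", 0.95)  # confident prediction
    return ("unknown", 0.40)             # low confidence -> needs a human


def human_review(text):
    """Stand-in for the hidden human annotator supplying the right label."""
    return "order_food"


def handle_request(text, training_data):
    label, confidence = model_predict(text)
    if confidence < CONFIDENCE_THRESHOLD:
        label = human_review(text)           # human quietly takes over
        training_data.append((text, label))  # saved to fine-tune the model later
    return label


collected = []
print(handle_request("Please schedule a call for Tuesday", collected))  # handled by the model
print(handle_request("Get me a burrito", collected))                    # handled by a human
print(len(collected), "example(s) queued for retraining")
```

The design choice worth noticing is the confidence gate: from the customer’s side, both requests look equally “automated”, which is exactly why this arrangement is so easy to keep hidden.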
It’s important to point out that most companies don’t set out to deceive. The decision to lie is often a byproduct of the A.I. problem proving too hard to solve before the company runs out of money.
It’s the ultimate “fake it until you make it” scheme. For example, a company’s chatbot may be “dumb” in its beginning stages, but the illusion of A.I. makes a subpar product more marketable.
The alternative — removing A.I. — is hard to justify because omitting it hurts marketing buzz and sales.
It all comes down to MONEY.
An article from The Guardian describing the rise of pseudo-AI says it best, “it’s cheaper and easier to get humans to behave like robots than it is to get machines to behave like humans.”
Gregory Koberger, CEO of ReadMe, says, “using a human to do the job lets you skip over a load of technical and business development challenges. It doesn’t scale, obviously, but it allows you to build something and skip the hard part early on.”
Another small yet important detail investors love about the notion of artificial intelligence? People tend to disclose more when they think they are talking to a machine rather than a person. And any marketer worth their salt understands the value of information, especially when it is information about a customer.
If the examples above teach us anything, it is that we must remain vigilant in our assessment of companies and their perceived technical prowess.
Professor Narayanan reminds us to engage our natural common sense abilities — a quality only humans possess — when scrutinizing flawed AI claims and to be wary of those using AI to replace efforts of reason and judgment.
For example, the professor’s snake oil study specifically highlights companies that leverage AI to automate judgment (i.e., copyright violation detection or automated essay grading) as “fundamentally dubious” in their AI efforts. He also flags companies using AI products to predict social outcomes (i.e., job success or at-risk youth) as suspect, because the potential for error and inaccuracy is far too great when such decisions are left to computers alone.
When it comes to how to spot companies using fake A.I., pay attention to:
A complete and total artificial intelligence rebrand should be cause for caution. Beware of companies advertising full A.I. because it seems we’re not quite there yet. Without human intelligence, a chatbot, an app development program, or a personal assistant remains quite simply “dumb”.
If you are unsure about a company’s artificial intelligence claims, remember this: humans are still required to help A.I. improve, even when companies are unwilling to admit it.
With so much at stake in the A.I. revolution, some companies aren’t always transparent with customers when another person is involved in the process. Still, transparency is key because it builds trust between your customers and your brand. Don’t be afraid to ask a company for testimonials, demos, or product details to help you uncover the truth.
Here at ChipBot, we publish monthly product updates, blogs, and videos to help our users stay informed always. See what ChipBot is doing to tackle the automation trend as it relates to the customer experience.
Lauren Hamer is a Digital Content Writer and Copy Editor who helps top companies and small start-ups to define their brand through quality, conversational content.