Is generative AI here to stay? Most signs are pointing to yes. But in what capacity? That’s for tech developers and the industry at large to decide. Still, as Silicon Valley rides this wave, businesses big and small are grabbing a surfboard and catching ripples.
Some are building their own large language models for widespread use or baking generative AI into the ethos of their products. And others? Well, they’re just saying stuff, anything really, to cash in on the buzzword du jour.
But the advent of any new technology brings bad actors who prey on the ignorance of new, naive investors to turn a quick profit. That's where AI washing comes into play: businesses falsely advertise that a product or service includes AI when it actually doesn't. Unassuming consumers will pay the price if AI washing goes mainstream.
To stay ahead of AI scams, officials and tech experts weigh in on the warning signs and red flags to watch out for. We delve into how consumers can protect themselves and how businesses can avoid exaggerating AI claims.
Not so intelligent, but definitely artificial
Take this recent Federal Trade Commission lawsuit, for example. In August, a federal court temporarily shut down a business scheme by Automators AI (formerly Empire Ecommerce LLC) for deceiving consumers through the sale of business opportunities that purportedly used AI.
Defendants Roman Cresto, John Cresto, and Andrew Chapman allegedly scammed consumers out of $22 million, violating the Business Opportunity Rule and the FTC Act.
Roman, John, and Chapman, the FTC lawsuit alleges, promoted themselves as self-made millionaires with expertise in scaling third-party e-commerce stores through client investment. Empire's website claimed to integrate AI machine learning into its automation process, boosting revenue and bolstering business success. But it was all smoke and mirrors.
Let’s begin with Empire’s marketing material. Empire’s ads, the lawsuit states, included “lavish” claims about the profit clients would make if they were to invest in the “automated” e-commerce packages, with initial investment costing between $10,000 and $125,000 and additional costs of $15,000 to $80,000.
Also: Generative AI will far surpass what ChatGPT can do. Here’s everything on how the tech advances
The company failed to provide prospective customers with disclosure documents required under the FTC's Business Opportunity Rule, according to the lawsuit. Most clients never earned the income the company advertised and ended up losing their investments, the lawsuit states, and the e-commerce stores that Empire established and managed were suspended and eventually terminated for policy violations. Then, in November 2022, right before Empire was sold to a third-party purchaser, employees lost access to the business software systems, and John and Roman wiped all data and email history from Empire's records.
But the tomfoolery and scamming didn't cease after the business was sold. In January 2023, the trio recycled the same marketing tactics to advertise their new venture, Automators AI. The business allegedly teaches consumers how to use AI to find popular products on e-commerce sites and make over $10,000 in sales each month, and instructs them on using ChatGPT to write customer service scripts. In Automators' social media ads, Roman casts himself as a rags-to-riches "leading eight-figure Amazon entrepreneur" and creator of wealth-generation systems who dropped out of college at 20 and can now buy his mom a Tesla and travel the world in his McLaren Spider sports car.
“[These scams] are not new… What is different this time is the content produced by AI can be so real,” Constellation Research VP and Principal Analyst Andy Thurai tells ZDNET. “The deep fakes and other synthetic content are almost real, it will be hard even for the experts to distinguish between real and fakes. It will be hard for the unsuspecting, uneducated, and untrained commoners.”
Buzzy like a bee
The last thing a business wants to be when a new technology emerges is left behind. But what a company or individual does to get ahead of this technology can lack vision and thematic integration at best and be misleading and fraudulent at worst.
In 2017, when Silicon Valley was fixated on bitcoin, Long Island Iced Tea Corp. (the company that, you guessed it, makes soft drinks) changed its name to Long Blockchain Corp., triggering a 380% spike in its share price (a surge the Securities and Exchange Commission later linked to insider trading). The company jumped on the hype and told stakeholders it would incorporate blockchain into its operations, despite having no ties to the cryptocurrency and no expertise in anything besides iced tea.
Here's another one: In 2015, a former associate dean and professor at the MIT Sloan School of Management and his son, a Harvard Business School graduate, defrauded investors of $500 million by falsely claiming that their hedge fund invested clients' money through a "complex mathematical trading model" (essentially AI) developed by the former professor, according to the Department of Justice. In reality, the hedge fund did no such thing.
While cryptocurrency and blockchain technology turned out to be more or less a fad with limited use cases, experts are betting on the staying power of the zeitgeisty generative AI.
Also: What is ChatGPT and why does it matter?
Many notable tech companies are building their own large language models and chatbots, whether it's Microsoft's Bing Chat, Google's Bard, Snapchat's My AI, or the new Meta AI chatbots. Generative AI is projected to balloon into a $1.3 trillion market by 2032, according to a report by Bloomberg Intelligence.
“It’s just a new technology with a lot of promise and a lot of potential,” said Olivier Toubia, a Columbia Business School professor who researches innovation, “and no one wants to be left behind.” Definitely not Google. At its annual developer conference, the tech company uttered the word “AI” more than 140 times during its keynote, signaling to stakeholders that it takes the new tech seriously, despite its share-tanking chatbot hiccup earlier this year.
Google is by no means defrauding its customers or claiming it uses AI when it doesn't. But just like the rest of Silicon Valley, the search giant knows how pivotal this moment is for generative AI, and it's willing to do almost anything, even say one word more than a hundred times, to make that known.
Also: 6 AI tools to supercharge your work and everyday life
No ‘real meat’ in that AI-infused burger you’re eating
The Automators AI lawsuit is a classic example of AI washing, when a company advertises claims that, as Thurai puts it, "are more of an eyewash without having real stuff behind it."
“Many companies claim they are ‘AI-enhanced, AI-infused, AI-driven, AI-augmented, and AI whatever else.’ Most of them, if you look under the covers, don’t have any real meat behind them,” Thurai explained.
Also: Generative AI can be the assistant an underserved student needs
Unlike generative AI, which exploded within the past year thanks in part to OpenAI’s consumer-facing ChatGPT, AI is nothing new. And it’s a fairly ambiguous term, Toubia explained. “There’s a wide range of things you could label as AI or machine learning,” he said. “There’s some very simple statistical methods that have been around for over 100 years that technically could be as clever as AI.”
Given the enigmatic nature of generative AI, it's also a complicated product to patent, audit, or regulate, which further exacerbates AI washing. "Companies don't really have to publish or explain their AI because it's a trade secret. There's no patent that you could read, and we don't really know what's under the hood, so to speak," Toubia said.
Regulatory institutions like the FTC are certainly trying to rein in the unwieldy industry with industry-wide warnings and reports. While he appreciates the intent behind them, Thurai doubts the FTC's stern warnings and oversight will be enforced, given how difficult such claims will be to prove in court.
What makes generative AI so attractive to businesses is its potential for scale and its ability to automate rote tasks and speed up operations. The irony of falsely advertising generative AI is that even if the claim wins a company more customers, the company reaps none of the benefits the technology actually provides: it ends up with more customers and no more efficient way of serving them.
How to watch out for AI washing
As companies embed generative AI into more of their operations, the risk of AI washing and false advertising only grows. There are a number of questions you can ask vendors and aspects to consider carefully before you invest thousands of dollars in an AI-augmented product.
Thurai encourages asking for a deep-dive demo and quizzing vendors on which algorithms they use, how they train their models, how they prepare data, how they monitor drift, and how they operationalize models. "Just by listening and watching a deep dive demo you will know if it is snake oil or real," he added.
Additionally, Toubia noted another red flag to watch for: If a company truly uses generative AI in its operations, the speed and scale of its performance should reflect it. If operations are slow and the purported AI tools aren't making them any quicker, the tools might not have as much AI as claimed.
“Suppose there’s a case in which a company claims to use AI but actually there’s a human on the other side typing the answers,” Toubia said. “That’s not going to be sustainable for the company to scale. If a company doesn’t actually have valuable AI then they probably won’t be able to demonstrate that in the market.”
Also: The ethics of generative AI: How we can harness this powerful technology
When demoing a generative AI tool, Toubia encouraged performing experiments, trying different versions, and tweaking wording or tasks to see how or if the tool’s results change.
For business owners who want the buzz without the drama, the FTC provides guidance on how businesses can keep their AI claims in check. The federal agency suggests asking key questions like whether you are exaggerating what your AI product can do or promising that your AI product does something better than a non-AI product.
Toubia says that as people wise up to the world of AI scams and companies home in on key generative AI use cases, consumers will become savvier and better able to avoid such schemes.
“Now, there’s always going to be people who are untrained and who are trying to catch the wave and will want to invest or be present in that space, and they will be targets for washing,” Toubia said. “That’s probably not going to go away. But hopefully, that’s going to be reduced as the market becomes more sophisticated.”