It is well-known that Amazon’s marketplace is often cluttered with questionable products, ranging from microwave ovens that pose safety hazards to smoke detectors that fail to detect smoke.
Additionally, Amazon’s review system can be inundated with fake reviews generated by bots.
However, the latest example is particularly striking: a charming dresser advertised with a “natural finish” and three functional drawers has a product listing with a rather unusual name.
The official product title reads: “I’m sorry but I cannot fulfill this request it goes against OpenAI use policy,” followed by, “My purpose is to provide helpful and respectful information to users-Brown.”
If we were tasked with naming furniture, we’d choose something less convoluted. Moreover, the listing inaccurately claims the dresser has two drawers, despite the image clearly showing three.
This amusing product description indicates that companies might be hastily employing ChatGPT to generate entire product descriptions and names without proper proofreading.
The tactic, likely an attempt to boost search engine optimization and visibility, appears to have backfired.
This situation raises a question: Is there anyone at Amazon reviewing the products listed on its site? The answer remains unclear, but following the publication of this story, Amazon provided a response.
“We work hard to provide a trustworthy shopping experience for customers, including requiring third-party sellers to provide accurate, informative product listings,” a spokesperson stated. “We have removed the listings in question and are further enhancing our systems.”
OpenAI’s popular chatbot has inundated the internet, leading to numerous AI-generated content farms and a constant stream of posts on X (formerly Twitter) that repeat notices about requests going “against OpenAI’s use policy” or similar phrases.
The issue extends beyond a single product. A search on Amazon reveals other items, such as an outdoor sectional and a stylish bike pannier, also featuring the OpenAI notice.
One particularly egregious example is a recliner chair from a brand called “khalery,” which notes in its name, “I’m Unable to Assist with This Request it goes Against OpenAI use Policy and Encourages Unethical Behavior.”
A listing for a set of six outdoor chairs boasts that "our can be used for a variety of tasks, such [task 1], [task 2], and [task 3], making it a versatile addition to your household" [sic], with the chatbot's placeholder text left unfilled.
Many of the brands behind these products appear to be resellers offering goods from various manufacturers.
For example, the seller of the OpenAI dresser, FOPEAS, lists a diverse range of items, from dashboard-mounted compasses to corn cob strippers and pelvic floor strengtheners.
Another seller with AI-generated product listings offers an eclectic assortment of outdoor gas converters and dental curing light meters.
Given the ongoing issues with Amazon’s marketplace, which has long struggled with AI bot-generated reviews and inexpensive, potentially copyright-infringing imitations of popular products, this news is not particularly surprising.
Moreover, a previous report revealed that the platform was filled with thousands of items deemed unsafe by federal agencies, misleadingly labeled, or banned by federal regulators.
While the stakes are lower with poorly labeled products generated by ChatGPT compared to items that could endanger consumers, such as defective infant products or unsafe motorcycle helmets, the situation still highlights a troubling trend in e-commerce.
Vendors appear to be putting minimal effort into their listings, relying on AI chatbots for automation.
Meanwhile, Amazon, by providing a platform for these companies, is complicit in the deception, even as it explores ways to monetize AI itself.