Fake reviews are eroding trust across the web, but there is new hope in combating them.
On Friday, the Federal Trade Commission (FTC) proposed new regulations targeting businesses involved in buying, selling, and manipulating online reviews.
If these rules are enacted, violators could face substantial fines—up to $50,000 for each fake review, each time a consumer sees it.
These proposed rules represent the federal government’s most substantial effort to curb the market for fake reviews.
However, they don’t address accountability for major review platforms like Yelp, Google, Tripadvisor, and Amazon directly. (Amazon founder Jeff Bezos owns The Washington Post, and interim CEO Patty Stonesifer is on Amazon’s board.)
You’ve likely encountered situations where a product is inundated with fake five-star reviews.
Sometimes, merchants even offer to pay for positive reviews. This type of deception undermines our collective power as consumers. (Have you been affected by fake reviews? Feel free to email me.)
Samuel Levine, director of the FTC’s Bureau of Consumer Protection, says, “Anyone who’s done any shopping online knows that obtaining objective information about a product is challenging due to the abundance of commercial misinformation and deceptive reviews.”
Consumer advocacy groups such as U.S. PIRG and independent researchers estimate that 30 to 40 percent of online reviews are fake or otherwise not genuine, though the rate varies by product and website.
There are entire businesses dedicated to creating fake reviews for scammers and merchants looking for shortcuts. This issue is likely to escalate with the advent of artificial intelligence like ChatGPT, which can produce highly convincing humanlike text.
Historically, the federal government has tackled this issue through individual lawsuits, but the FTC’s new rules aim to address it more systematically.
The FTC considers fake reviews illegal because they mislead consumers. The proposed rules, which will be open for two months of public comment before finalization, would clarify responsibilities and enhance the FTC’s ability to take action.
The new rules prohibit several practices: misleading reviews about personal experiences, fake reviews from non-existent individuals, and reviews written by insiders without proper disclosure.
These rules will target not only those who write fake reviews but also the intermediaries who procure them and the companies that pay for them, provided they knew or should have known the reviews were fake.
However, there are some exceptions. Businesses can still request reviews from real customers, which is crucial for building an online reputation.
Giving legitimate customers a gift card for leaving a review is also allowed, as long as the review isn’t contingent on expressing a particular opinion, though disclosure of significant incentives is recommended.
The rules also address shady practices such as “review hijacking,” where a merchant replaces a product listing with a different product that customers never used. Earlier this year, the FTC fined a supplement maker $600,000 for this practice on Amazon.
Additionally, businesses can’t run websites that falsely claim to host independent reviews while secretly promoting their own products. They also can’t suppress negative reviews through intimidation or legal threats.
“It’s crucial to deter these practices upfront, so businesses know they could face substantial penalties,” says Levine.
Besides the $50,000 fine per fake review, the FTC would also have the authority to recover money for consumers harmed by these fraudulent reviews.
The FTC plans to use the new rules to streamline enforcement, though it will not receive additional resources for this purpose. Enforcement could also be challenging for businesses located overseas in countries that do not collaborate with the FTC.
Many consumer advocates argue that addressing the entire fake-review economy is necessary for a comprehensive solution.
Social media platforms like Facebook and Twitter are easy venues for recruiting fake-review writers. While Facebook has removed some fake review groups and Amazon has sued leaders of numerous fake-review groups, the problem persists.
Review platforms and retailers such as Yelp, Google, and Amazon have significant control over what reviews are published and profit from them.
However, the FTC’s rules do not extend liability to these platforms unless they are directly involved in procuring fake reviews. There is also no requirement for sites to verify users’ identities or ensure they have used the products.
“Many of them assert immunity under Section 230 of the Communications Decency Act,” Levine notes, referring to the law that shields online platforms from responsibility for user-generated content. This makes it challenging for the FTC to hold these platforms accountable.
Despite claims of taking the issue seriously—Amazon blocked over 200 million suspected fake reviews in 2022, and Yelp flagged 19 percent of reviews as “not recommended”—there is still a significant problem.
“On any given day, I can find thousands of fakes myself without automation. That’s just one person,” says Kay Dean, who runs Fake Review Watch. “There’s little incentive for self-policing, and no real repercussions.”
One suggestion is for review sites to be more transparent about removing fake reviews so that consumers and investigators can better track these activities.
“We need more data and transparency about why review content is displayed,” says Saoud Khalifah, founder of Fakespot, which uses AI to detect fake reviews.
The major review sites are running out of excuses. “Regardless of the liability regime, it is in the interests of consumers and businesses using these platforms for them to improve their policing of this issue.
They have the best visibility into what’s happening and are often in the best position to address it,” says Levine.