"Your hands are stained with blood." At a hearing of the U.S. Senate Judiciary Committee in late January, a Republican senator leveled this accusation at the top executives of five major social media companies, including Mark Zuckerberg.
The committee was discussing the dangers children face online. Amid a flood of content promoting sexual exploitation and distorted beauty standards, the executives were grilled over their responsibility to protect children.
Strikingly, there is ample evidence to support this accusation.
Shortly before the hearing, sexually explicit deepfake images of Taylor Swift went viral. It was chilling to watch these images appear, spread, and continue to circulate even after they were labeled as fake.
With the US presidential election taking place this year, there is a risk that deepfakes designed to skew political debate and election results will proliferate. And some of this harmful and dangerous misinformation can reach society in the form of advertising.
The vast volume of user-generated content (UGC) that can be uploaded freely to open platforms is extremely difficult to screen in advance. Advertising, however, is a different matter. Even if we cannot remove or change what is already being shown, there is certainly more we can do to raise the quality of online advertising.
As in other media, every advertisement should be vetted by experts before it is displayed. That would certainly reduce harmful advertising.
Companies and individuals pay for advertising space. So why not fund stricter oversight by raising prices, accepting lower margins, or creating new business models?
The automated ad review systems that major tech companies have built using AI and machine learning are more sophisticated than I can fully grasp, and genuinely impressive. But catching a great many bad ads is not the same as catching all of them (as has already been demonstrated), and it never will be.
So we now face only two options. Build a vetting system like Clearcast (the non-governmental body that pre-approves almost all TV advertising in the UK)? Or accept platforms that rely on automated review systems and therefore also display illegal and misleading ads? In the latter case, the collateral damage must also be accounted for.
Establishing proper oversight would affect the business models and profits of companies that currently rely on automated screening.
But acting more responsibly should not bankrupt big tech companies that make enormous profits. For them it is a cost, but it is also a benefit to society and an opportunity to improve their reputation (and perhaps that of advertising itself, given our industry's credibility problem).
Frankly, cost should not be an issue here. Meeting moral standards naturally costs money. If cost is an issue, that points to a flawed business model. Corporations that repeatedly (and knowingly) harm society should not retain the right to pursue profit.
Big tech will push back: we do not neglect ad review; we invest heavily in technologies such as AI and machine learning to automate the review process; we also employ people to handle ads that are hard to classify; ads judged not to meet our standards are removed, and so on.
They will also say there are simply too many ads to review manually. Everything happens in real time, so that advertisers can fine-tune their campaigns and creative on the fly. So much is happening so fast that automation is the only solution.
If our industry wants to eradicate harmful and illegal advertising, we must change the system so it acts before ads are ever displayed. So far, removals have happened only after some damage has already been done.
Given these business models, how much collateral damage is acceptable? If a business model needs fixing, when will that be acknowledged? Where should we draw the line between business responsibility and legal liability?
Automation brings many benefits to people's lives; robots performing surgery that demands delicate technique are a good example. But where fine judgment and interpretation are required, where crime or social harm may occur, and where large sums of money change hands, the wisdom of experienced humans is essential.
Rigorous ad vetting, or convenient but flawed ad vetting: we cannot have both.
Lindsay Clay is the CEO of Thinkbox, a British television marketing company.