Deepfakes and the 2020 US elections: what (did not) happen
João Paulo Meneses, CECS, Portugal
D011454@ismai.pt

Abstract
Alarmed by the volume of disinformation assumed to have circulated during the 2016 US elections, scholars, politicians and journalists predicted the worst when the first deepfakes began to emerge in 2018. Yet the 2020 US elections were widely considered the most secure in American history. This paper seeks explanations for this apparent contradiction: we believe that it was precisely the multiplication and conjunction of different types of warnings and fears that created the conditions that prevented malicious political deepfakes from affecting the 2020 US elections. From these warnings, we identified four factors: a more active role of social networks, new laws, difficulties in accessing Artificial Intelligence, and better awareness of society. But while this formula proved effective in the United States in 2020, it is not correct to assume that it can be repeated in other political contexts.
Keywords: Deepfakes, Social Networks, Disinformation, US Elections
ORCID: 0000-0003-2365-3832
Introduction
One year before the 2020 US elections, Nisos1 published a report (‘Understanding the illicit economy for synthetic media’2) that read ‘we do not anticipate widespread deepfake3 use in disinformation campaigns in the near term (to include the 2020 election cycle)’.
In retrospect, Nisos experts made the right forecast.
However, this was a clear minority opinion.
Before and after their report, dozens of politicians and institutions drew considerable attention to the approaching danger: ‘imagine a scenario where, on the eve of next year’s presidential election, the Democratic nominee appears in a video where he or she endorses President Trump.
Now, imagine it the other way around.’ (Sprangler, 2019).
It is fair to say that deepfakes’ high potential for disinformation was noticed long before these hypothetical consequences were evoked, mainly because deepfakes proved to be highly credible.
Two examples: ‘In an online quiz, 49 percent of people who visited our site said they incorrectly believed Nixon’s synthetically altered face was real and 65 percent thought his voice was real’ (Panetta et al, 2020), or ‘Two-thirds of participants believed that one day it would be impossible to discern a real video from a fake one.
42 percent of people believed it is very or extremely likely that deepfakes will be used to mislead voters in 2020’ (AMEinfo, 2020).
However, perhaps the most frightening factor for anyone interested in the consequences of disinformation was the predictable difficulty of combating or neutralizing malicious deepfakes, since they are built on a technology that is constantly evolving: Artificial Intelligence (AI).
The same expert, Hany Farid, a professor from the University of California, Berkeley, stated in another source that ‘[i]n January 2019, deepfakes were buggy and flickery.
Nine months later, I’ve never seen anything like how fast they’re going.
This is the tip of the iceberg’ (Toews, 2020).
Many factors make this technology something special, probably unlike anything that was known until now, as Katarya and Lal (2020) state: ‘The existence of such open-source software and the availability of devices in the market for fabricating and propagating these falsified information has brought to attention the immediate need for detection and elimination of malicious deepfake content’.
1.1 Warnings about the 2020 US elections
The dangers that deepfakes can pose for an election were widely anticipated prior to the 2020 US elections.
Although the weight of the two technologies is different, the basic fear was that deepfakes could play the same role that fake news did in 2016.4 Consequently, there were warnings from various sectors (politicians, technologists, and academics) about the potential of deepfakes to cause harm: ‘with so few people undecided about the upcoming presidential election, influencing just a handful of people on the margins can sway an election’ (Polakovic, 2020).

1 https://www.nisos.com/company
2 https://www.nisos.com/deep_fakes
3 In our definition, a deepfake is completely or partially fake content in video, audio, text and/or image form that was generated using Artificial Intelligence. Thus, deepfakes are not limited to their most popular form, namely videos.
4 In summary, the Mueller Report concluded that Russia’s Internet Research Agency and military intelligence service (GRU) used a range of digital tactics to target the 2016 US elections.

The following list presents some relevant examples of this potential:
- ‘In the old days, if you wanted to threaten the United States, you needed 10 aircraft carriers, and nuclear weapons, and long-range missiles. Today (…) all you need is the ability to produce a very realistic fake video that could undermine our elections, that could throw our country into tremendous crisis internally and weaken us deeply,’ stated US Senator Marco Rubio (Porup, 2019).
- In his opening remarks, committee Chair Adam Schiff warned in 2019 that the technology could enable a ‘nightmarish’ scenario for the upcoming presidential campaigns and declared that ‘now is the time for social media companies to put in place policies to protect users from misinformation, not in 2021 after viral deepfakes have polluted the 2020 elections’ (Galtson, 2020).
- Senator Ben Sasse, a Republican from Nebraska who introduced a bill to criminalize the malicious creation of deepfakes, warned that deepfakes could ‘destroy human lives’, ‘roil financial markets’, and even ‘spur military conflicts around the world’ (Wolfgang, 2018).
- ‘(…) Deepfakes could also be used to create entirely new fictitious content, including controversial or hateful statements with the intention of playing upon political divisions, or even inciting violence’, wrote researchers Puutio and Timis (2020).
- ‘Disinformation conveyed via deepfakes could pose a challenge during elections, since, to the untrained eye, a deepfake may be difficult to distinguish from a real video. Any political actor could try to discredit an opponent or try to incite some political scandal with the goal of furthering their own agenda’ (Dobber et al, 2020: 2).
- ‘The upcoming US presidential election in November 2020 will serve as a bellwether not only for Western liberal democracies, but for the rest of the world’ (Schick, 2020a: 21) and ‘I believe the corrupt information ecosystem will play an even greater role in the 2020 election than it did in 2016’ (Schick, 2020a: 113).
- ‘If executed and timed well enough, such interventions are bound to tip an outcome sooner or later, or cast a shadow of illegitimacy over the election process itself’ (Citron & Chesney, 2019: 1779).
- Deepfake attacks on democracies ‘will be common’, Emerging Technologies Fellow Lindsay Gorman told panelists gathered online on March 12 at the Information Technology Innovation Foundation (Patton, 2020).
- ‘We’ve seen proofs of concepts of deepfakes being released that could be used to influence the electorate. (…) If a deepfake is dropped at the right time, say maybe two or three days before an election occurs, imagine the impact that could have if it goes viral?’, stated Matt Price of ZeroFox (Roby, 2019).
This list of examples is intended to show that public warnings about the dangers of deepfakes appeared well before the elections, and that the issue united people with very different interests around a common goal.
1.2 Some previous cases
‘We haven’t seen any deepfakes released in the wild that we think are genuinely malicious, not saying that they’re deepfakes and trying to mask what they are’, said Price (Roby, 2019).
In Gabon, a video of President Ali Bongo was suspected of being a deepfake: ‘The military believed the video was a fake, although the president later confirmed it was real’ (Siyech, 2020).
In India, a day before the Delhi election in February, two videos of Delhi Bharatiya Janata Party (BJP) unit President Manoj Tiwari hit the internet, in which he urged voters to vote for his party.
The videos were then reported as deepfakes (Kumar, 2020). In Italy, a satirical television show used a deepfake video unfavourable to the prime minister, Matteo Renzi, depicting him insulting fellow politicians. ‘As the video spread online, many individuals started to believe the video was authentic, which led to public outrage’ (Buo, 2020).
However, there appears to be a disconnect between this succession of warnings and the events that subsequently transpired.
As Grossman (2020) expressed, ‘if there has been a surprise in campaign tactics this cycle, it is that these AI-generated videos have played a very minor role, little more than a cameo’.
2.0 What really happened in the 2020 US elections
Collectively, there are four reasons that explain why malicious political deepfakes did not appear during the electoral period in the United States.
Over the course of at least the year before the elections, thousands of deepfakes were disseminated; although most were of a pornographic nature, hundreds of videos that were seemingly benign in scope (e.g., humorous, artistic, etc.) were also released. In other words, not only did deepfake technology continue to develop, but hundreds of creators5 around the world continued to produce this type of content.
2.1 Interventions from social networks and technological platforms
After receiving significant criticism for their passivity in the 2016 US elections, social networks (and technological platforms more generally) changed their discourse from 2018 onwards.
They moved away from maintaining a generally neutral attitude towards the content and towards a more active, interventionist, and even controversial form of censorship.
‘If they do not avail themselves of this opportunity—and if deepfakes rampage through next year’s election, leaving a swathe of falsehoods and misrepresentations in their wake—Congress may well move to strip the platforms of the near total immunity they have enjoyed for a quarter of a century, and the courts may rethink interpretations of the First Amendment that prevent lawmakers from protecting fundamental democratic processes’ (Chesney and Citron, apud Galston, 2020).
5 https://www.technologyreview.com/2019/09/25/132884/google-has-released-a-giant-database-of-deepfakes-to-help-fight-deepfakes/
‘(…) they don’t really have an incentive to go around and try to take down this synthetic content or even note that it is synthetic content’ (Roby, 2019).
And the chief executive officer of Alphabet and Google, Sundar Pichai (2020), underlines: ‘Now there is no question in my mind that artificial intelligence needs to be regulated’.
Some of the most significant moves of social networks were as follows:
In January 2020, Facebook announced that they would begin removing content that ‘has been edited or synthesized (…) in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say’ (Hwang, 2020: 14).
Twitter adopted a broader approach in February 2020, announcing that they would remove and warn users against ‘synthetic or manipulated media that are likely to cause harm’ (Hwang, 2020: 14).
A Twitter blog post rounding up its election efforts said it had added labels warning of misleading content to 300,000 tweets since October 27, which was 0.2 percent of all election-related posts in that period.
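For scale, a quick back-of-the-envelope check of that figure (a sketch only, taking the 300,000 labels and the 0.2 percent share at face value):

```python
# Back-of-the-envelope: if 300,000 labeled tweets were 0.2 percent of
# election-related posts, the implied total volume in that period is:
labeled_tweets = 300_000
labeled_share = 0.002  # 0.2 percent

total_posts = labeled_tweets / labeled_share
print(f"{total_posts:,.0f} election-related posts")  # 150,000,000 election-related posts
```

That is, the labels covered roughly 150 million election-related posts, which illustrates how small a fraction of the stream was actually flagged.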
‘These efforts will likely play a significant role in both pushing technical advancements and smoothing research solutions into practical [use] by intermediaries across the web’ (Hwang, 2020: 15).
However, others did not, which led Camille François to remark that alternative platforms ‘designed to host content moderated away from the main platforms carry the risk of creating an entire alternative ecosystem where disinformation and hate can thrive’ (TSPAF, 2020).
Seeking technological solutions to combat deepfakes
To combat fake news, social networks resorted to fact-checking.
However, eliminating malicious deepfakes requires – and will increasingly require – the use of AI tools.
Research by Ahmed (2020), Farid and Agarwal (Manke, 2019), and Thaw et al (2020) demonstrated that ordinary people had difficulty identifying videos made with deepfake technology.
There have been countless efforts to develop effective detection systems (Goled, 2020), but researchers ‘have yet to establish a foolproof method’ (Grossmann, 2020).
‘Factors such as the need to avoid attribution, the time needed to train a Machine Learning model, and the availability of data will constrain how sophisticated actors use tailored deepfakes in practice’, suggested Hwang (2020: iii).
Furthermore, Du et al (2020) recognized that ‘although lots of efforts have been devoted to detect deepfakes, their performance drops significantly on previously unseen but related manipulations and the detection generalization capability remains a problem’.
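To make concrete what the decision stage of a deployed detector can look like, the sketch below is a minimal illustration (not the method of any of the cited papers): it assumes a trained per-frame classifier already exists and shows only the common aggregation step from per-frame ‘fakeness’ scores to a video-level verdict; the function names and threshold are hypothetical.

```python
# Minimal sketch of the video-level decision stage of a deepfake detector.
# Per-frame scores would come from a trained classifier; here they are
# illustrative numbers between 0 (likely real) and 1 (likely fake).

def video_score(frame_scores, top_k=3):
    """Average the top_k highest per-frame fakeness scores.

    Focusing on the most suspicious frames keeps the detector sensitive
    to videos where only a few frames are manipulated.
    """
    if not frame_scores:
        raise ValueError("no frame scores given")
    k = min(top_k, len(frame_scores))
    return sum(sorted(frame_scores, reverse=True)[:k]) / k

def is_fake(frame_scores, threshold=0.5):
    """Flag the video if the aggregated score crosses the threshold."""
    return video_score(frame_scores) >= threshold

# A mostly clean video with a short manipulated segment:
scores = [0.10, 0.05, 0.90, 0.85, 0.20, 0.95]
print(round(video_score(scores), 2))  # 0.9
print(is_fake(scores))                # True
```

The generalization problem Du et al describe lives upstream of this step: if the per-frame classifier was trained on one family of manipulations, its scores on unseen manipulation types drift toward the ‘real’ end, and no choice of threshold here can recover them.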
‘It’s a good thing that the 2020 election wasn’t swarmed by deepfakes, because attempts to automatically detect AI-generated [media have not been] successful’.

2.2 New laws
Efforts were made to ensure that the laws immediately had a federal dimension, but concrete results actually manifested in two states: California and Texas.
Efforts were made to ensure that the laws immediately had a federal dimension, but concrete results actually manifested in two states: California and Texas.
California’s case is more relevant, not only due to its stature as the largest state in the United States (and home to many technology companies), but also because two laws were passed that came into effect in early 2020.
The first law (AB 602) was designed to combat pornographic deepfakes, whilst the second law (AB 730) prohibited the use of deepfakes to influence political campaigns.
AB 730 prohibits the distribution of materially deceptive audio or visual media that depicts a candidate for office within 60 days of an election ‘with actual malice’, or the intent to injure the candidate’s reputation or deceive voters into voting for or against the candidate.
‘Significantly, this measure exempts print and online media and websites if that entity clearly discloses that the deepfake video or audio file is inaccurate or of questionable authenticity’ (Halm et al, 2019).
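As an illustration of how the statute’s timing element operates, the toy check below models only the 60-day window (AB 730’s other elements, such as ‘actual malice’ and the disclosure exemption, are not modeled; the function name and election date are our own choices):

```python
from datetime import date, timedelta

# Toy model of AB 730's timing element only: media depicting a candidate
# falls inside the restricted period if distributed within 60 days
# before the election (here, 3 November 2020).
ELECTION_DAY = date(2020, 11, 3)

def in_restricted_window(distributed_on, election_day=ELECTION_DAY, days=60):
    """True if distribution happened 0-60 days before election day."""
    delta = election_day - distributed_on
    return timedelta(0) <= delta <= timedelta(days=days)

print(in_restricted_window(date(2020, 10, 1)))  # True: 33 days before
print(in_restricted_window(date(2020, 8, 1)))   # False: 94 days before
```

The disclosure exemption described above would sit as an additional condition on top of this date check.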
Along the way, efforts were made to create legislation by lawmakers such as Ben Sasse, who introduced what was viewed as the first bill to criminalize the malicious creation and distribution of deepfakes.
‘This bill would make it a crime to create and distribute a deepfake without including a digital marker of the modification and text statement acknowledging the modification’ (Galston, 2020).
On 20 December 2019, President Donald Trump signed the United States’ first law related to deepfakes.
This legislation was part of the National Defense Authorization Act (NDAA) for Fiscal Year 2020.
‘In two provisions related to this emerging technology, the NDAA (1) requires a comprehensive report on the foreign weaponization of deepfakes; (2) requires the government to notify Congress of foreign deepfake-disinformation activities targeting US elections; and (3) establishes a ‘Deepfakes Prize’ competition to encourage the research or commercialization of deepfake-detection technologies’, as explained by Hale et al (2019). These measures resulted in the Deepfakes Report Act of 2019.6 Although it is not legislation, we can also associate with this topic the webpage ‘Rumor vs. Reality’7, sponsored by the Cybersecurity and Infrastructure Security Agency, which is part of the Department of Homeland Security and which continued to be active after November 3. Or the two programs created by the Defense Advanced Research Projects Agency to improve ‘defenses against adversary information operations’, states the Congressional Research Service (Neus, 2020).

6 https://www.govtrack.us/congress/bills/116/hr3600
7 https://www.cisa.gov/rumorcontrol
2.3 ‘Why take the risk?‘
Of the four reasons presented in this paper, there is one that, at least in part, does not result from such a concerted but unorganized effort to prevent deepfakes: ‘depending on what the deepfake depicts, there may be significant expense in acquiring the data, structuring it properly, and running the training process’ (Hwang, 2020: 3).
Whilst some who bet on disinformation as a tactic in electoral interference were intimidated by these efforts (either because they feared the consequences of being caught or that their work would be halted at the source, namely on social networks), others understood that resorting to deepfake technology was not necessary or even possible.
In other words, because ‘factors such as the need to avoid attribution, the time needed to train a Machine Learning model, and the availability of data will constrain how sophisticated actors use tailored deepfakes in practice’ (Hwang, 2020: iv), or just because ‘people seem to be much more taken with pornographic possibilities than bringing down Governments’ (Boyd, 2020), the truth is that deepfakes are likely to attract attention from automated filters, while conventional film editing and obvious lies won’t.
2.4 Better awareness of society
Although this may be the least relevant factor, it may be appropriate to highlight what appears to be a greater social awareness of disinformation, especially compared to the period before the 2016 US elections.
The efforts of the Knight Foundation, a freedom of speech advocate, are also worth noting.
The organization awarded $50 million in grants in 2019 to 11 universities and research institutions in the United States to study how technology is transforming democracy (Brunner, 2019).
One of the curious observations about the deepfakes affirmation process is that, at different times and in different countries, deepfake technology has been used to denounce the dangers of deepfake technology itself. There are many examples of this, including the classic deepfake produced in 2018 by comedian Jordan Peele, in which former US President Barack Obama calls Trump a ‘dipshit’8; the two videos commissioned by British think tank Future Advocacy, in which British Prime Minister Boris Johnson endorses his opponent, Jeremy Corbyn; and another
8 https://www.youtube.com/watch?v=bE1KWpoX9Hk
video in which Corbyn endorses Johnson9; or a pair of deepfake advertisements – perhaps the first serious use of deepfakes in the US political campaign – released by nonpartisan advocacy group RepresentUs featuring Russian President Vladimir Putin and North Korean leader Kim Jong-un, both disseminating the same message: they do not need to interfere with the US elections, because the United States was capable of damaging its democracy on its own.10 ‘Some have even been used as part of a public service campaign to express the importance of saving democracy’ (Grossmann, 2020).
Lastly, ‘the fact-checking and journalism ecosystem did better with 2020 disinformation than many had feared after the distortions of 2016’ (Simonite, 2020; see also Chaturvedi, 2020; Spencer, 2020).
3.0 Conclusion
‘The November 3rd election was the most secure in American history’, claimed the Election Infrastructure Government Coordinating Council executive committee and the Election Infrastructure Sector Coordinating Council11.
The question of this paper is not about disinformation in general, since fake news continued to proliferate in the months prior to the elections (Abbasi and Derakhti, 2020; Schick, 2020a: 75).
Rather, it is about malicious political deepfakes.
They appear to have played a minor role in the 2020 US elections, as indicated by the fact that the most-discussed topic in the months before 3 November was a faked video showing former Vice President Joe Biden sticking his tongue out, which was tweeted out by the president himself (Mak and Temple-Raston, 2020).
A more relevant case was the Typhoon Investigations dossier, detailing on several levels hypothetical business connections of Joe Biden’s son, Hunter Biden. Eventually, journalists denounced the report as being authored by a fake person, ‘the spurious photograph of whom is a realistic avatar created by AI. The 64-page ‘Martin Aspen’ dossier is a fraudulent project, fraught with slimy intrigue throughout’ (Cunningham, 2020). From the hypersensitivity of social networks to disinformation and the proliferation of deepfakes, which led them not only to detect but also to intervene and prevent it – a measure that they had not taken until the moment when the elections were approaching – to the creation of laws to prevent what really had not yet happened: was all of this exaggerated?
Apparently so, but it was this exaggeration that enabled the creation of a climate that helped to nullify the putative effect of deepfakes.
It was a combination of various factors, as if we faced a (positive) ‘perfect storm’, that made the 2020 elections in the United States the ‘most secure’.
Some of these factors may have been more relevant than others, as noted by Chesney and Citron, who believed that the content screening and removal policies of social media platforms may have proven to be ‘the most salient response mechanism of all’, as their terms of service agreements are ‘the single most important documents governing digital speech in today’s world’ (Galston, 2020).
However, we followed Paul Barrett, the author of a New York University 2019 report that listed deepfakes first on a list of disinformation predictions for 2020.12 He wrote that ‘the warnings may have worked, convincing would-be deepfake producers that their clips would be quickly unmasked’.
But he warns that this may not remain the case. In addition, Paris and Donovan argued that an exclusively technological approach is insufficient for addressing the threats of deep and cheap fakes13.
According to them, any solution must take into account ‘the history of evidence’ (2019: 47).
Furthermore, Price stated that ‘there’s a number of different ways that this problem can be tackled, and I don’t think any one by itself is a solution’ (Roby, 2019).
We agree with Grossmann (2020): ‘the reason there have not been more politically motivated malevolent deepfakes designed to stoke oppression, division, and violence is a matter of conjecture’.
The abovementioned combination of factors resulted in this specific context, but there is no guarantee that the same will be true of the future, in other elections.
The public and widespread perception in the United States of the role that disinformation played in 2016 was decisive; in other countries, this widespread perception is unlikely to exist, since ‘across most of the world there is minimal commitment to taking on malicious deepfakes’ (Lamb, 2020).
Given the challenging nature of policing deepfakes and the fact that they are ‘also the most sophisticated tool of misinformation and disinformation that has existed to date’ (Carruthers, 2020), the following list of recommendations is aimed at helping to address this problem in the future:
- Using available systems in the absence of a universal detection and blocking system.
This would help to improve the final quality of the best detection software.
‘As the deepfake technology approaches towards generating fake content with considerably improved quality, it will likely become impossible to detect them shortly’ (Katarya and Lal, 2020).
- Continuing to invest in increasingly effective detection and blocking systems.
This will also depend on the level of funding that universities and startups can secure.
As stated by Nasir Memon, a professor of computer science at NYU, ‘there’s no money to be made out of detecting [fake media], which means that few people are motivated to research ways to detect deepfakes’ (Redick, 2020).
‘Also of particular concern is the use of deep fakes in propaganda and misinformation in regions with fragile governance and underlying ethnic tensions.’ Moreover, if ‘new security measures consistently catch many deepfake images and videos, people may be lulled into a false sense of security and believe we have the situation under control’, said Professor Bart Kosko of the Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California (Paul, 2020).
This must be avoided at all costs.
In addition, as Schick (2020) anticipated, the question, then, should not be ‘(when) will political visual disinformation affect politics?’, since disinformation is already reshaping our political reality.
The writing of this paper began shortly after 3 November, the US Election day, and concluded less than three months later; more information about the situation is bound to emerge at a later date.
Although it was stated in this paper that there was no knowledge of malicious political deepfakes during the two years prior to the 2020 elections in the United States, it is important to note that several deepfakes were very likely stopped by social network detection algorithms before they could be released and shared – that is, they would have existed if not for these systems, but they were pre-emptively identified and the public never learned of their existence.
4.0 References
Abbasi, A., and Derakhti, A., ‘An Exploratory Study on Disinformation and Fake News Associated with the U.S. 2020 Presidential Election’, 2020
Ahmed, S., ‘Who inadvertently shares deepfakes? Analyzing the role of political interest, cognitive ability, and social network size’ [https://doi.org/10.1016/j.tele.2020.101508]
AMEinfo, ‘Proof threshold: exploring how Americans perceive deepfakes’ [https://www.ameinfo.com/industry/digital-and-media/proof-threshold-exploring-how-americans-perceive-deepfakes]
Boyd, C., ‘Deepfakes and the 2020 United States election: missing in action?’, 2020/10/16 [https://blog.malwarebytes.com/cybercrime/2020/10/deepfakes-and-the-2020-united-states-election-missing-in-action/]
Brandom, R., ‘Deepfake propaganda is not a real problem’, 2019/03/05 [https://www.theverge.com/2019/3/5/18251736/deepfake-propaganda-misinformation-troll-video-hoax]
Brunner, J., 2019/12/04 [...right-for-the-rest-of-the-country-how-uw-and-wsu-plan-to-fight-the-digital-deepfakes/]
Buo, S. A., ‘The Emerging Threats of Deepfake Attacks and Countermeasures’, 2020/12 [http://dx.doi.org/10.13140/RG.2.2.23089.81762]
Carruthers, B., ‘The coming age of synthetic media and what it means for marketers’, 2020/10/26 [https://www.warc.com/newsandopinion/opinion/the-coming-age-of-synthetic-media--and-what-it-means-for-marketers/3889]
Chaturvedi, A., ‘US elections 2020: Facebook India fact checker NewsMobile and UC Berkeley startup FakeNetAI to detect deep fakes on social media’, 2020/09/30 [https://economictimes.indiatimes.com/tech/internet/us-elections-2020-facebook-india-factchecker-newsmobile-and-uc-berkeley-startup-fakenetai-to-detect-deep-fakes-on-socialmedia/articleshow/78386921.cms]
Citron, D. K., and Chesney, R., ‘Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security’, 1753 [https://scholarship.law.bu.edu/faculty_scholarship/640]
Cunningham, P., ‘Fake faces and fake news: ‘Martin Aspen’ dossier on Hunter Biden’s China connection’, 2020/10/30 [http://jinpeili.blogspot.com/2020/10/fake-faces-and-bald-faced-liesbehind.html]
Dobber, T., Metoui, N., Trilling, D., Helberger, N., and de Vreese, C., ‘Do (Microtargeted) Deepfakes Have Real Effects on Political Attitudes?’, The International Journal of Press/Politics, 2020 [https://doi.org/10.1177/1940161220944364]
Du, M., Pentyala, S. K., Li, Y., and Hu, X., ‘Towards Generalizable Deepfake Detection with Locality-aware AutoEncoder’, CIKM ’20: Proceedings of the 29th ACM International Conference on Information and Knowledge Management, 2020/10 [https://doi.org/10.1145/3340531.3411892]
Fernandez, 2020/09/30 [https://www.axios.com/deepfakes-technology-misinformation-problem-71bb7f2b-5dc2-4fbd-9b56-01ad430c1a4e.html]
Galtson, W. A., ‘Is seeing still believing?
日誌 [https://economictime s.indiatimes.com/tec h/internet/us-electi ons-2020-facebook-in dia-factchecker-news mobile-and-uc-berkel ey-startup-fakenetai -to-detect-deep-fake s-on-socialmedia/art icleshow/78386921.cm s?utm_source=contentofinterest&am p;utm_medium=text&utm_c ampaign=cppst] Citron, D. K., and Chesney, R., ‘Deep Fakes: A Looming Challenge for Privacy, Democracy, and National 1753 [https://scholarship. law.bu.edu/faculty_s cholarship/640] Cunningham, P., ‘Fake faces and fake news: ‘Martin Aspen’ dossier on Hunter Biden’s China connection’, 2020/10/30 [http://jinpeili.blog spot.com/2020/10/fak e-faces-and-bald-fac ed-liesbehind.html] Dobber, T., Metoui, N., Trilling, D., Helberger, N., and de Vreese, C., Deepfakes 2020, Press/Politics [https://doi.org/10.1 177/1940161220944364 ] Du, M., Pentyala, S. K., Li, Y., and Hu, X., ‘Towards Generalizable Deepfake Detection with Locality-aware AutoEncoder’, 2020/10, CIKM ‘20: Proceedings of the 29th ACM International Conference Management Information [https://doi.org/10.1 145/3340531.3411892] Fernandez, 2020/09/30 detecting [https://www.axios.co m/deepfakes-technolo gy-misinformation-pr oblem-71bb7f2b-5dc2- 4fbd9b56-01ad430c1a4 e.html] Galtson, W. A., ‘Is seeing still believing?
0.63
The deepfake challenge to truth in politics’, 2020/01/08 [https://www.brooking s.edu/research/is-se eing-still-believing -the-deepfake-challe nge-totruth-in-polit ics/] Goled, ‘Top [https://analyticsind iamag.com/top-ai-bas ed-tools-techniques- for-deepfake-detecti on/] Grossman, G., ‘Deepfakes may not have upended the 2020 U.S. election, but their day is coming’, 2020,11/01 [https://venturebeat. com/2020/11/01/deepf akes-may-not-have-up ended-the2020-u-s-el ection-but-their-day -is-coming/] Hale, W., Chipman, J., Ferraro, M. and Preston, S., ‘First Federal Legislation on Deepfakes Signed Into [https://www.jdsupra. com/legalnews/first- federal-legislation- ondeepfakes-42346/] Halm, K.C., Doran, A.K., Segal, J. and Kalinowski IV, Caeser, ‘Two New California Laws Tackle Deepfake 2019/10/14 [https://www.dwt.com/ insights/2019/10/cal ifornia-deepfakes-la w] Hwang, T., Emerging assessment/] Katarya, R., and Lal, A., ‘A Study on Combating Emerging Threat of Deepfake Weaponization’, 2020/10 [https://ieeexplore.i eee.org/document/924 3588] Kumar, rapidly’, is [https://www.analytic sinsight.net/landsca pe-deepfake-content- escalating-rapidly/]
The deepfake challenge to truth in politics’, 2020/01/08 [https://www.brooking s.edu/research/is-se eing-still-believing -the-deepfake-challe nge-totruth-in-polit ics/] Goled, ‘Top [https://analyticsind iamag.com/top-ai-bas ed-tools-techniques- for-deepfake-detecti on/] Grossman, G., ‘Deepfakes may not have upended the 2020 U.S. election, but their day is coming’, 2020,11/01 [https://venturebeat. com/2020/11/01/deepf akes-may-not-have-up ended-the2020-u-s-el ection-but-their-day -is-coming/] Hale, W., Chipman, J., Ferraro, M. and Preston, S., ‘First Federal Legislation on Deepfakes Signed Into [https://www.jdsupra. com/legalnews/first- federal-legislation- ondeepfakes-42346/] Halm, K.C., Doran, A.K., Segal, J. and Kalinowski IV, Caeser, ‘Two New California Laws Tackle Deepfake 2019/10/14 [https://www.dwt.com/ insights/2019/10/cal ifornia-deepfakes-la w] Hwang, T., Emerging assessment/] Katarya, R., and Lal, A., ‘A Study on Combating Emerging Threat of Deepfake Weaponization’, 2020/10 [https://ieeexplore.i eee.org/document/924 3588] Kumar, rapidly’, is [https://www.analytic sinsight.net/landsca pe-deepfake-content- escalating-rapidly/] 訳抜け防止モード: 2020/01/08[https://www.brooking s.edu/research/is-se eing- Still-believing-the- deepfake-challenge-t otruth-in-politics/] トップ [ https://analyticsind iamag.com/top-ai-bas ed-tools-techniques- for-deepfake-detecti on/] Grossman, G。 しかし、2020,11/01[https://venturebeat. com/2020/11/01/deepf akes-may-not-have-up ended-the 2020-u-s-election- but-their-day-is-com ing/] W., Chipman, J., Ferraro, M. and Preston, S., ‘ First Federal Legislation on Deepfakes Signed Into [ https://www.jdsupra. com/legalnews/first- federal-legislation- ondeepfakes-42346/ ] Halm, K.C.、ドーラン、A.K.、セガル、J.、カリノフスキー4世。 Caeser, ‘ Two New California Laws Tackle Deepfake 2019/10/14 [ https://www.dwt.com/ insights/2019/10/cal ifornia-deepfakes-la w ] Hwang, T., Emerging Assessment/ ] Katarya, R., and Lal, A. 
「ディープフェイクウェポン化の新興脅威に関する研究」 https://www.analytic sinsight.net/landsca pe-deepfake-content- escalating-rapidly/]
0.63
for Security and [https://cset.georget own.edu/research/dee pfakes-a-grounded-th reat-
‘Deepfakes, A Grounded Threat Assessment’, 2020/07, Center Technology
2020/07 センター技術'Deepfakes, a Grounded Threat Assessment'
0.80
impossible’, 2019/12/24
不可能」。 2019/12/24
0.57
2020/09/15
2020/09/15
0.39
2020/11/10
2020/11/10
0.39
Techniques of deepfake
技術 ですから Deepfake
0.67
content escalating so
内容 エスカレート だから
0.61
V., ‘The landscape
V。 「The」 風景
0.72
For Deepfake
のために ディープフェイク
0.45
AI-Based Tools &
AIベース 道具 &
0.71
Detection’,
Detection’,
0.85
deepfakes almost
Deepfakes ほとんど
0.76
Videos in Law’,
ビデオ イン 法律』。
0.64
Politics and Porn’,
政治 そして Porn’,
0.77
英語(論文から抽出)
日本語訳
スコア
School 2020/10/12
学校 2020/10/12
0.59
‘Researchers From the
「研究者」 来歴 はあ?
0.47
and Engineering Use Facial Quirks
工学的利用は 顔のクイック
0.62
‘Where Are The Deepfakes
ディープフェイクはどこ?
0.26
In This Presidential Election?’, [https://www.npr.org/ 2020/10/01/918223033 /where-are-the-deepf akes-in-this-
This Presidential Election?’, [https://www.npr.org/ 2020/10/01/918223033 /where-are-the-deepf akes-in-This]
0.37
Lamb, H., ‘Sex, coups, and the liar’s dividend: what are deepfakes doing to us?’, 2020/04/08 [https://eandt.theiet .org/content/article s/2020/04/sex-coups- and-the-liar-s-divid end-whatare-deepfake s-doing-to-us/] Mak, T. and Temple-Raston, T., 2020/10/01 presidential-electio n?t=1609249908361] Manke, K., I to Unmask ‘Deepfakes’’, 2019/06/18 [https://www.ischool. berkeley.edu/news/20 19/researchers-i-sch ooland-engineering-u se-facial-quirks-unm ask-deepfakes] Neus, E., ‘Election Results Remained Secure Under Barrage of Disinformation, Altered Video’, 2020/11/18 [https://fedtechmagaz ine.com/article/2020 /11/election-results -remained-secureunde r-barrage-disinforma tion-altered-video] Panetta, F., Amer, P., and Harrell, D. F., ‘We made a realistic deepfake, and here’s why we’re worried’, [https://www.bostongl obe.com/2020/10/12/o pinion/july-we-relea seddeepfake-mit-with in-weeks-it-reached- nearly-million-peopl e/] Paris, B. and Donovan, J., ‘Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence’, 2019/09/18 [https://datasociety. net/library/deepfake s-and-cheap-fakes/] Patton, A., ‘Panelist at Information Technology and Innovation Foundation Event Say Deepfakes Are a Double-Edged Sword’, 2020/03/23 [http://broadbandbrea kfast.com/2020/03/pa nelist-atinformation -technology-and-inno vation-foundation-ev ent-say-deepfakes-ar e-a-double-edgedswor d/] Paul, Detectors’, [https://viterbischoo l.usc.edu/news/2020/ 05/fooling-deepfake- detectors/] Pichai, regulate [https://www.ft.com/c ontent/3467659a-386d -11ea-ac3c-f68c10993 b04] Polakovic, G., ‘From deepfakes to fake news, an array of influences aim to shape voter decisions’, 2020/09/23 [https://phys.org/new s/2020-09-deepfakes- fake-news-array-aim. html] Porup, 2019/04/10 risk’, [https://www.csoonlin e.com/article/329300 2/deepfake-videos-ho w-and-why-they-work. html] Puutio, A., Timis, A., ‘Deepfake democracy: Here’s how modern elections could be decided by fake [https://www.weforum. 
org/agenda/2020/10/d eepfake-democracycou ld-modern-elections- fall-prey-to-fiction /] Raj, Y., [https://www.jurist.o rg/commentary/2020/0 6/yash-raj-deepfakes /] Redick, N., ‘What the Rise of Deepfakes Means for the Future of Internet Policies’, 2020/11/19 [https://www.mironlin e.ca/what-the-rise-o f-deepfakes-means-fo r-the-future-of-inte rnetpolicies/] Roby, K., [https://www.techrepu blic.com/article/the -sinister-timing-of- deepfakes-and-the-20 20election/]
Lamb, H., ‘Sex, coups, and the liar’s dividend: what are deepfakes doing to us?’, 2020/04/08 [https://eandt.theiet .org/content/article s/2020/04/sex-coups- and-the-liar-s-divid end-whatare-deepfake s-doing-to-us/] Mak, T. and Temple-Raston, T., 2020/10/01 presidential-electio n?t=1609249908361] Manke, K., I to Unmask ‘Deepfakes’’, 2019/06/18 [https://www.ischool. berkeley.edu/news/20 19/researchers-i-sch ooland-engineering-u se-facial-quirks-unm ask-deepfakes] Neus, E., ‘Election Results Remained Secure Under Barrage of Disinformation, Altered Video’, 2020/11/18 [https://fedtechmagaz ine.com/article/2020 /11/election-results -remained-secureunde r-barrage-disinforma tion-altered-video] Panetta, F., Amer, P., and Harrell, D. F., ‘We made a realistic deepfake, and here’s why we’re worried’, [https://www.bostongl obe.com/2020/10/12/o pinion/july-we-relea seddeepfake-mit-with in-weeks-it-reached- nearly-million-peopl e/] Paris, B. and Donovan, J., ‘Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence’, 2019/09/18 [https://datasociety. net/library/deepfake s-and-cheap-fakes/] Patton, A., ‘Panelist at Information Technology and Innovation Foundation Event Say Deepfakes Are a Double-Edged Sword’, 2020/03/23 [http://broadbandbrea kfast.com/2020/03/pa nelist-atinformation -technology-and-inno vation-foundation-ev ent-say-deepfakes-ar e-a-double-edgedswor d/] Paul, Detectors’, [https://viterbischoo l.usc.edu/news/2020/ 05/fooling-deepfake- detectors/] Pichai, regulate [https://www.ft.com/c ontent/3467659a-386d -11ea-ac3c-f68c10993 b04] Polakovic, G., ‘From deepfakes to fake news, an array of influences aim to shape voter decisions’, 2020/09/23 [https://phys.org/new s/2020-09-deepfakes- fake-news-array-aim. html] Porup, 2019/04/10 risk’, [https://www.csoonlin e.com/article/329300 2/deepfake-videos-ho w-and-why-they-work. html] Puutio, A., Timis, A., ‘Deepfake democracy: Here’s how modern elections could be decided by fake [https://www.weforum. 
org/agenda/2020/10/d eepfake-democracycou ld-modern-elections- fall-prey-to-fiction /] Raj, Y., [https://www.jurist.o rg/commentary/2020/0 6/yash-raj-deepfakes /] Redick, N., ‘What the Rise of Deepfakes Means for the Future of Internet Policies’, 2020/11/19 [https://www.mironlin e.ca/what-the-rise-o f-deepfakes-means-fo r-the-future-of-inte rnetpolicies/] Roby, K., [https://www.techrepu blic.com/article/the -sinister-timing-of- deepfakes-and-the-20 20election/]
Now Detects 94.7% of the Hate Speech That Gets Removed From Its Platform’, 2020/11/19 [https://www.nbcdfw.c om/news/business/mon ey-report/facebookcl aims-a-i-now-detects -94-7-of-the-hate-sp eech-that-gets-remov ed-from-itsplatform/ 2484885/] Simonite, ‘What [https://www.wired.co m/story/what-happene d-deepfake-threat-el ection/] Smith, deepfakes’, [https://www.aspi.org .au/report/weaponise d-deep-fakes] Spencer, [https://www.factchec k.org/2020/08/biden- video-deceptively-ed ited-to-make-him-app earlost/] Sprangler, T., ‘What will U.S. do to combat deepfakes?
Now Detects 94.7% of the Hate Speech That Gets Removed From Its Platform’, 2020/11/19 [https://www.nbcdfw.c om/news/business/mon ey-report/facebookcl aims-a-i-now-detects -94-7-of-the-hate-sp eech-that-gets-remov ed-from-itsplatform/ 2484885/] Simonite, ‘What [https://www.wired.co m/story/what-happene d-deepfake-threat-el ection/] Smith, deepfakes’, [https://www.aspi.org .au/report/weaponise d-deep-fakes] Spencer, [https://www.factchec k.org/2020/08/biden- video-deceptively-ed ited-to-make-him-app earlost/] Sprangler, T., ‘What will U.S. do to combat deepfakes?
0.41
Michigan members of Congress want to know’2019/12/24 [https://eu.freep.com /story/news/local/mi chigan/2019/12/24/de epfakesmichigan-memb ers-of-congress-addr ess-threat/267922200 1/] Siyech, M. S., ‘From US elections to violence in India, the threat of deepfakes is only growing’, 2020/09/29 [https://www.scmp.com /comment/opinion/art icle/3103331/us-elec tionsviolence-india- threat-deepfakes-onl y-growing] The Sciences Po American Foundation, [https://www.sciences po.fr/us-foundation/ node/805.html] Stacey, Threat [https://www.forbes.c om/sites/edstacey/20 19/10/28/can-startup s-solve-the-threat-o fdeepfakes/?sh=34abcac825c0] Thaw, N., July, T., Wai, A. N., Goh, D. H-L., and Chua, A.
Michigan members of Congress want to know’2019/12/24 [https://eu.freep.com /story/news/local/mi chigan/2019/12/24/de epfakesmichigan-memb ers-of-congress-addr ess-threat/267922200 1/] Siyech, M. S., ‘From US elections to violence in India, the threat of deepfakes is only growing’, 2020/09/29 [https://www.scmp.com /comment/opinion/art icle/3103331/us-elec tionsviolence-india- threat-deepfakes-onl y-growing] The Sciences Po American Foundation, [https://www.sciences po.fr/us-foundation/ node/805.html] Stacey, Threat [https://www.forbes.c om/sites/edstacey/20 19/10/28/can-startup s-solve-the-threat-o fdeepfakes/?sh=34abcac825c0] Thaw, N., July, T., Wai, A. N., Goh, D. H-L., and Chua, A.
0.48
Y.K., ‘Is it real?
Y.K.、それは本物か?
0.47
A study on detecting deepfake videos’, 2020/10/22, Proceedings of Information Science and Technology [https://doi.org/10.1 002/pra2.366] Toews, R., ‘Deepfakes Are Going To Wreak Havoc On Society.
ディープフェイクビデオの検出に関する研究 : 2020/10/22, Proceedings of Information Science and Technology [https://doi.org/10.1 002/pra2.366] Toews, R., ‘Deepfakes Are Going to Wreak Havoc On Society.
0.72
We Are Not Prepared’, 2020, 05/25 [https://www.forbes.c om/sites/robtoews/20 20/05/25/deepfakes-a re-going-to-wreak-ha vocon-society-we-are -not-prepared/?sh=9192fd874940] Waddell, into [https://www.axios.co m/deepfake-laws-fb5d e200-1bfe-4aaf-9c93- 19c0ba16d744.html] Wolfgang, B., ‘Putin developing fake videos to foment 2020 election chaos: ‘It’s going to destroy lives’’, [https://www.washingt ontimes.com/news/201 8/dec/2/vladimir-put insdeep-fakes-threat en-us-elections/] (All hyperlinks used in this work were verified and confirmed as valid to 2020/12/31)
We Are Not Prepared’, 2020, 05/25 [https://www.forbes.c om/sites/robtoews/20 20/05/25/deepfakes-a re-going-to-wreak-ha vocon-society-we-are -not-prepared/?sh=9192fd874940] Waddell, into [https://www.axios.co m/deepfake-laws-fb5d e200-1bfe-4aaf-9c93- 19c0ba16d744.html] Wolfgang, B., ‘Putin developing fake videos to foment 2020 election chaos: ‘It’s going to destroy lives’’, [https://www.washingt ontimes.com/news/201 8/dec/2/vladimir-put insdeep-fakes-threat en-us-elections/] (All hyperlinks used in this work were verified and confirmed as valid to 2020/12/31)
0.47
‘The 2020 election and disinformation’, 2020/12/03