Accountability, a requisite for responsible AI, can be facilitated through
transparency mechanisms such as audits and explainability. However, prior work
suggests that the success of these mechanisms may be limited to Global North
contexts; understanding the limitations of current interventions in varied
socio-political conditions is crucial to help policymakers facilitate wider
accountability. To do so, we examined the mediation of accountability in the
existing interactions between vulnerable users and a 'high-risk' AI system in a
Global South setting. We report on a qualitative study with 29
financially-stressed users of instant loan platforms in India. We found that
users experienced intense feelings of indebtedness for the 'boon' of instant
loans, and perceived huge obligations towards loan platforms. Users fulfilled
obligations by accepting harsh terms and conditions, over-sharing sensitive
data, and paying high fees to unknown and unverified lenders. Users
demonstrated a dependence on loan platforms by persisting with such behaviors
despite risks of harms such as abuse, recurring debts, discrimination, privacy
harms, and self-harm. Instead of being enraged with loan platforms,
users assumed responsibility for their negative experiences, thus releasing the
high-powered loan platforms from accountability obligations. We argue that
accountability is shaped by platform-user power relations, and urge caution to
policymakers in adopting a purely technical approach to fostering algorithmic
accountability. Instead, we call for situated interventions that enhance agency
of users, enable meaningful transparency, reconfigure designer-user relations,
and prompt a critical reflection in practitioners towards wider accountability.
We conclude with implications for responsibly deploying AI in FinTech
applications in India and beyond.
∗Work done during an internship at Google Research, India.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.
2022. How Platform-User Power Relations Shape Algorithmic Accountability: A Case Study of Instant Loan Platforms and Financially Stressed Users in India.
https://doi.org/10.1145/3531146.3533237

1 INTRODUCTION
Accountability is necessary to ensure that artificial intelligence (AI) is deployed responsibly, especially given the wide applicability of AI algorithms to several automated decision-making contexts with ‘high stakes’ [40, 57, 58, 81].
While automated decision systems (ADS) [106] have the potential to make more efficient and fairer decisions than their human counterparts [45, 52], they could also produce harmful outcomes, worsening inequality in society [20, 23, 50, 54, 92, 94, 98].
Through accountability relationships, the actors responsible for harms caused by the ADS can be obligated to provide ‘accounts’ to the individuals who are harmed; the individuals or their representatives may then judge the accounts, and seek to impose consequences if necessary [121].
In this way, we can ensure that the use of ADS occurs in accordance with the interests of all stakeholders.
Facilitating organizational and technical transparency could reduce distrust among stakeholders, and enhance accountability relationships [10, 53, 90, 112].
Perceived agency of stakeholders [69, 81], their education levels [43], and their optimism in AI [73] could complicate the rhetoric of ‘stakeholder distrust in ADS.’ Further, the efficacy of transparency mechanisms towards accountability depends on the presence of a critically-aware public, legislative support, watchdog journalism, and the responsiveness of technology providers [17, 60, 76].
Understanding the limitations of current approaches in varied socio-political conditions is crucial to help policymakers adopt context-appropriate interventions, and ensure wider accountability.
Prior work has sought to ease the burden on technology providers towards fulfilling transparency obligations [89, 102], and studied their impacts on users affected by ADS [42, 113, 122].
However, these studies have largely centered on Global North contexts. To fill this gap, we examined how algorithmic accountability is mediated in existing interactions between vulnerable users and a ‘high-risk’ ADS in a Global South setting; one with weak legislation and nation-wide high optimism for AI.
We conducted a qualitative study with financially stressed low- and middle-income users of instant loan platforms in India.
These platforms target ‘thin-file’ borrowers (i.e., users ineligible for offerings from formal financial services) with various small credit offerings, often in the range of INR 500 - INR 100,000 (USD 7 - USD 1500).
Instant loan platforms have risen to prominence in recent years through a combination of factors such as affordable smartphones [80], the state’s push for widespread digital adoption [97], promotion of financial technology (FinTech) as the poster child of AI success in India [11], and financial challenges to users brought by the COVID-19 pandemic [18].
Through semi-structured interviews with 29 users of instant loan platforms from low and middle income groups in India, we examined how financially stressed users made meaning of their experiences with the ‘high-risk’ ADS, and how they perceived their relations to accountability.
We found that users were drawn to loan platforms due to the promises of immediate money, minimal verification, and long tenure periods, which were enabled by instantaneous and synchronous aspects of AI.
Users also perceived additional benefits such as enhanced privacy and dignity, preserved social ties, and social mobility through the use of these platforms.
Users perceived and fulfilled several obligations towards lenders, even at the risk of undergoing abuse, discrimination, emotional and reputational harms, and self-harm.
Yet, instead of being enraged with loan platforms, users shared responsibility for their negative experiences.
Through this work, we make the following contributions: First, we explore the relationship between users’ experiences of ADS, their social conditions, and accountability.
In doing so, we build upon previous work in FAccT, and explore the social dimensions of accountability through a case study on loan platforms in the Global South.

2 Non-traditional financial modeling data such as mobile phone and social media usage, financial transactions, images and videos used to model risk.
We make an empirical contribution regarding how low-powered users perceive and demonstrate a dependence on the ‘high-risk’ ADS, holding themselves responsible for its failures; and how these user behaviors release high-powered actors from accountability obligations.
Next, situating our findings in the literature on accountability, we argue that algorithmic accountability is mediated through platform-user power relations, and can be inhibited by socio-political realities of the context.
We urge caution to policymakers in adopting universal technical interventions to foster accountability, and instead propose situated [114] approaches towards achieving parity in platform-user power relations.
2 RELATED WORK
In this section, we give a brief overview of the platform-user dynamics envisioned in literature on algorithmic accountability, the mechanisms designed to structure accountability relationships, and a glimpse into India’s AI landscape.
2.1 Accountability
Technical and organizational opacity are viewed as primary barriers to fostering accountability [27, 98], suggesting the need for transparency from technology providers [35–37].
Technology providers have also taken steps towards increasing transparency due to a combination of user demands and self-imposed responsibility [19, 84, 117, 123].
For instance, researchers studying the experiences of ADS among users have reported increased distrust among users and their desire for transparency into ADS [25, 47]; Uber drivers accused the company of deception due to its use of an opaque ADS and demanded more transparency [116]; and Yelp users expressed the need for transparency into its recommendation algorithm [48].
Consequently, several regulatory policies mandate public access to information, hoping that affected users will use this information towards making accountability demands from technology providers [16, 32, 51, 59, 99].
In fact, such an approach has been extremely successful recently.
After audit trails of harmful facial recognition systems were made available to the public, there were widespread public campaigns that eventually led to their regulation in the US and UK [26].
Favorable outcomes to users from otherwise discriminatory ADS may impede their accountability efforts [48, 120], users may resist imputing moral responsibility to ADS [21], and notions of accountability could vary by users’ backgrounds [70].
FAccT ’22, June 21–24, 2022, Seoul, Republic of Korea

Nation-states and users in the Global South view ADS aspirationally and deferentially [109]; users may attribute far-reaching capabilities to ADS, placing misguided trust in them [93]; and ADS may enjoy a legitimized power to influence users’ actions, even with little or no evidence of their true capabilities [73].
Prior work calls for aligning transparency with user needs [42, 75].
These findings warrant a closer examination of the accountability dynamics in varied socio-political conditions; a gap that we seek to fill in this paper.
2.2 Mechanisms of Algorithmic Accountability
A recent survey of algorithmic accountability policies in the public sector from 20 national and local governments found that transparency was the prime focus of policies [9].
Under a dynamic where users express skepticism and seek to take action towards accountability, transparency (i.e., of models, datasets, and practices surrounding the development of ADS) is a viable mechanism [53, 64, 89].
Mechanisms to increase transparency can be standalone such as documentation (i.e., of source-code, datasets, models, and processes surrounding the development of ADS) [21, 30, 55, 64, 83, 105] and explainable decisions (i.e., to help the users make informed choices when interacting with ADS) [10, 21, 42, 83, 100, 119]; or be embedded in other mechanisms such as algorithmic audits [26, 46, 112] and impact assessments [101, 102].
In fact, some studies with audits and explainability mechanisms have documented positive outcomes such as raising users’ critical awareness [100], increasing their desires to seek accountability from the designers of ADS [113], and influencing technology providers to make changes in ADS [46, 101].
However, the efficacy of mechanisms in fostering accountability also relies on other factors such as a critically-aware public, legislative support, watchdog journalism, and the responsiveness of high-powered actors [17, 60, 76].
Raji and Buolamwini acknowledge the importance of consumer awareness and capitalistic competition in complementing their audit efforts in facial recognition regulation [101].
Unfortunately, such surrounding conditions for accountability are not universally available [109].
Organizational-level changes from technology providers most often occur as a result of regulatory and user pressures [103], low-powered users may find it challenging to regain their agency displaced by platforms [69, 81], and mechanisms may have limited efficacy where there is power asymmetry [17, 76].
This line of work calls for examining platform-user power relations when designing mechanisms.
2.3 AI Landscape in India
India, a country of 1.38 billion people where half the population is under the age of 25, is considered an emerging force in AI due to its growing information technology workforce, research in AI, investments, and cloud-computing infrastructure [31].
India envisions AI as a force for socio-economic upliftment, which is seen through state-supported industry initiatives [49] and wide deployment of AI in surveillance [67], agriculture [34], and welfare processing systems [49].
However, such promotion is accompanied by weak legislation.
The two national AI strategies, i.e., the AI Task Force report [115] and the NITI Aayog’s National Strategy for AI [91], are focused on increasing adoption of AI [115] or include prescriptive guidelines towards accountability with insufficient enforcement mechanisms [91].
Similar recommendations are found in state-level policies on AI in India [56].
Policy-oriented research from the FAccT
and HCI communities have pointed out how adopting accountability frameworks from the Global North may fail without due consideration to the local contexts where they are applied [71, 85, 109]; Sambasivan et al. have noted the differences in axes of discrimination and notions of fairness [109].
Kalyanakrishnan et al. and Marda et al. documented the amplification of biases when using Western frameworks in the Indian context [71, 86].
We contribute to this emerging line of work on AI policy and research agenda in India.
3 METHODS
3.1 Interviews
We conducted 29 semi-structured interviews with low- and middle-income individuals (16 men, 13 women) primarily from Karnataka and Tamil Nadu regions in India.
We recruited participants who had used instant loan apps from non-banking financial companies within 6 months to 2 years of our study, through the DoWell Research agency and snowball sampling.
We provided INR 1500 (USD 20) as incentives to our participants.
We sampled participants based on age, gender, prior experience using instant loan applications and success of loan approval.
We first explained what AI meant to participants through examples of YouTube and Facebook, and then presented this scenario: Due to COVID, many people are in need of money but don’t have jobs, or access to PAN cards and bank accounts.
Some apps suggest using AI to make lending decisions.
Instead of bank details, they will look at users’ mobile phone information such as biometrics, location, call logs, financial transactions and shopping apps used on devices, and users’ social media activity to make decisions on loan applications.
They believe that this approach will increase people’s access to loans.
What are your thoughts about this?
3.2 Analysis
We conducted reflexive thematic analysis to analyze our data [24].
In the familiarization phase, the first author listened to each audio recording at least once, and read each transcript at least twice, paying close attention to participants’ choice of phrases, especially in regional languages, their emotional reactions to questions, hesitations, pauses, and repetitions.
We recorded these observations and reflections and shared them during weekly research meetings with the rest of the team, which then served as aids in coding the data.
Ramesh et al.

In the coding phase, the lead author followed an open-coding approach first, staying close to the data (i.e., needing money urgently, not telling friends the reason for money) [108], and iteratively revised the codes with the second author (i.e., ‘instant’ money, preserving privacy, feelings of indebtedness), resolving disagreements through discussion.
In this work, we present 3 themes that we generated from 11 stable codes: (1) Perceived Benefits of Instant Loan Platforms, (2) Perceived Obligations to Instant Loan Platforms, and (3) Dependence on Instant Loan Platforms.
3.3 Ethical Considerations and Limitations
We approached this topic with great care, knowing the dire circumstances of participants.
We reflected carefully on whether this study was time-appropriate.
Several participants were ecstatic to be part of our research, hoping to express their gratitude towards loan companies through our report.
One participant requested extra time to share their experiences in depth.
These incidents helped us view our participants as individuals in their own right, rather than as victims of their circumstances, and gave us confidence that this research was timely.
During the interview, we let participants guide the discussion towards the experiences that were most salient to them.
We stored data on Google Drive and restricted access to the research team.
We also took care to anonymize the data and report them in this paper.
We intentionally do not specify the names of loan platforms that we recruited users from to preserve anonymity.
Although we attempted to recruit participants across gender, our sample skews more towards men.
We also do not have any perspectives from nonbinary-identifying individuals.
Due to the COVID-19 pandemic, we conducted all our interviews over video and phone, which limited our ability to include observations and contextual inquiry.
4 CONTEXT
4.1 Participant Demographics
10 participants belonged to urban-middle income groups, and the remaining 19 participants belonged to urban-low or lower-middle income groups.
25 of our participants worked in the service sectors as accountants and chefs in restaurants, carpenters, customer service, sales and marketing representatives, tailors, taxi and auto-rickshaw drivers, or owned small businesses.
2 participants worked in health and education sectors, and 2 participants identified their primary roles as “house-wives.
" All our participants incurred signicant loss of incomes during the pandemic.
Several participants (n=16) were responsible for supporting 4-5 member households with reduced or no incomes; they had pledged or sold the few assets they possessed, and in a few cases, the very assets that were sources of income to them.
In addition, vulnerability for them meant having to comply with exploitative rules from informal lenders, from their children’s schools, from local state offices, and participants having no monetary or social capital to even claim their rights.
4.2 Instant Loan Applications
Instant loan platforms, primarily classified as ‘FinTech,’ provide technical infrastructure to connect NBFCs (shadow banking entities that offer financial services without a banking license) [1] with borrowers.
They offer small, short term loans, typically INR 500 - 500,000 over a period of 15 days - 6 months, using machine learning on a combination of CIBIL scores3 and ‘alternative credit data.’ Although the workings of these loan apps are proprietary, most instant loan apps, in their privacy policy, disclose using the following as ‘alternative credit data’: ‘know-your-customer’ (KYC) data such as names, addresses, phone numbers, PIN codes, reference contacts, photos, and videos, personal account number (PAN), Aadhar number (unique identification number); device information such as location, hardware model, build model, RAM, storage, unique device identifiers like Advertising ID, SIM information that includes network operator, WIFI and mobile network information, cookies; financial SMS sent by 6-digit alphanumeric senders; and information obtained from 3rd party providers for making credit decisions [2, 4–7].
Applications also use this data for analyzing user behavior for advertising and security purposes.
Apps use AI in several other ways, including facial recognition for completing verification, natural language processing for information extraction and contract automation, machine learning for fraud detection and market analysis, and chatbots to provide customer service [3].
These platforms, targeted at borrowers from low and middle income groups, have recently proliferated in the market, and are hailed by the state as the ‘drivers of economic growth’ for the ‘unbanked’ India [11].
We found that this promise was the precursor to a cycle of reciprocal exchanges between loan platforms and the users, which we discuss with the help of the following themes: (1) Perceived Benefits of Instant Loan Platforms, (2) Perceived Obligations to Instant Loan Platforms, and (3) Dependence on Instant Loan Platforms.
5.1 Perceived Benefits of Instant Loan Platforms
Participants who were successful in availing instant loans through the applications expressed great excitement and gratitude towards these platforms.
While many of our participants faced significant financial hardships even before the COVID-19 pandemic, almost all of them experienced exacerbated difficulties during the pandemic.
These platforms also offered attractive benefits, giving our participants the perception of loans with no strings attached.
We highlight perceived benefits of instant loan platforms with the help of the following codes:
1) Being able to access money anytime, anywhere,
2) Ensuring dignity and privacy,
3) Preserving social ties, and
4) Promising social mobility.
3 Credit scores generated by the Credit Information Bureau India Limited.
5.1.1 Being able to access money anytime, anywhere.
Participants’ enthusiasm for instant loans often highlighted their distrust in formal banking sectors, a finding also reported in research on financial experiences of other vulnerable populations [95, 118].
Our participants despised extensive verification processes of formal loans.
Formal loan processes required applicants to submit a long list of identity verification documents such as birth certificates, caste certificates, assets documents, and employment certificates.
Finding the right set of documents to produce is never an easy task for anyone, and was exceptionally difficult for those participants who had lower levels of education and literacy.
Further, participants were required to seek willing guarantors who would support their applications, open up their homes to unannounced visits from loan officers, and haggle with them for weeks, even after which there was no guarantee of a loan.
There are a lot of internal things that we don’t understand.
" In contrast, instant loans arrived into users’ bank accounts within minutes of them making requests. While the requirements of dierent apps varied slightly, participants generally recalled providing minimal details such as the their names, addresses, phone numbers, seles, permanent account numbers (PAN) and Aadhar card numbers (unique identication numbers). Given the convenience of loan apps, participants anecdotally mentioned borrowing loans from quaint locations such as under the streetlights and bathrooms at midnight. Such instant money was a boon for participants like P10 during dire circumstances: "[W]hen I installed the app, the rst thing I was happy about - instant cash.
" In contrast, instant loans arrived into users’ bank accounts within minutes of them making requests. While the requirements of dierent apps varied slightly, participants generally recalled providing minimal details such as the their names, addresses, phone numbers, seles, permanent account numbers (PAN) and Aadhar card numbers (unique identication numbers). Given the convenience of loan apps, participants anecdotally mentioned borrowing loans from quaint locations such as under the streetlights and bathrooms at midnight. Such instant money was a boon for participants like P10 during dire circumstances: "[W]hen I installed the app, the rst thing I was happy about - instant cash. 訳抜け防止モード: 「対照的に、インスタントローンは請求の数分以内にユーザーの銀行口座に届きました。 ディシレントアプリの要件は微妙に異なるが、参加者は一般的に名前などの最小限の詳細を思い出した。 住所、電話番号、電話番号、永久口座番号(pan)。 そして、aadharカード番号(ユニークな識別番号)。 参加者は、真夜中に街灯や浴室の下など、静かな場所からの借入について言及している。 このような即時的なお金は,p10のような参加者にとって,悲惨な状況下での恩恵でした。 私が一番嬉しかったのは、インスタントキャッシュです。
0.75
Because it was the urge of the money at that point of time.
And I got within 5 minutes! Believe it or not 5 minutes, I received the money.
" 5.1.2 Ensuring dignity and privacy. Several participants who praised the features of instant loans shared their experiences with local money lenders or pawn brokers, whom they had turned to due to diculties in getting loans from banks. However, local lenders had often charged high interest rates, demanded repayment on their whims, and had sometimes employed aggressive recovery tactics such as visiting homes, and harassing our participants and their families. P3 recalled, “I had taken once a 2000 rupees (USD 27) loan [from a local lender], and they had charged me 500 interest that I was supposed to repay within a month. They were harassing me, and wanted the money immediately."
" 5.1.2 Ensuring dignity and privacy. Several participants who praised the features of instant loans shared their experiences with local money lenders or pawn brokers, whom they had turned to due to diculties in getting loans from banks. However, local lenders had often charged high interest rates, demanded repayment on their whims, and had sometimes employed aggressive recovery tactics such as visiting homes, and harassing our participants and their families. P3 recalled, “I had taken once a 2000 rupees (USD 27) loan [from a local lender], and they had charged me 500 interest that I was supposed to repay within a month. They were harassing me, and wanted the money immediately." 訳抜け防止モード: 「5.1.2 尊厳とプライバシーの確保。 インスタントローンの特徴を賞賛する参加者は、地元の銀行家やポーンブローカーと自らの経験を共有した。 銀行から融資を受けることの困難さが原因だった。 しかし、地元の銀行はしばしば高い金利を請求し、その気まぐれに対する返済を要求していた。 時折 自宅訪問などの積極的な 回復策を採っていました 参加者とその家族を嫌がらせしました。 2000ルピー(usd 27)の融資を1回受け取りました。 1ヶ月以内に返金すると500人の利子を請求した 嫌がらせしてた すぐにお金を欲しがった」と述べた。
0.74
Fearing reputation harm from local lenders and gossip in their social circles, participants avoided seeking such loans except in unavoidable circumstances.
Instant loan applications, by nature of being on users’ mobile devices, offered a high degree of privacy that was previously unavailable for participants.
Our participants welcomed such features as hallmarks of borrower-friendliness.
4 Having access to birth certificates and caste certificates is highly correlated with class, caste and socio-economic status in India. As of 2016, 62.3% of children under the age of 5 had birth certificates, and 69.1% of all household members had Aadhar cards [65].

P3, who was quoted previously, contrasted their experiences, “Whether your loan amount is large or small, [instant loan platforms] will give you some time to repay... [When local lenders were harassing me], I took this instant loan.
[The app] gave me three months time to repay the money and interest was also just 300 rupees.
" In addition, the promise of digital transactions instilled hopes of digital repercussions on defaulting, like a meagre impact on participants’ CIBIL credit scores, thus increasing their overall comfort in borrowing instant loans. 5.1.3 Preserving social ties. Almost all our participants reported routinely turning to their closest circles during times of need. However, such lending-and-borrowin g was riddled with complexities. First, it was dicult for participants to even muster the courage to ask their social circles. In social circles, small loans were indicative of participants’ inability to manage their households, and thus hurt their respectability. When P14 asked relatives for help with her child’s education, she received unsolicited advice in the guise of care: “[They said], ‘why do you want to send [your kid] to that school paying high fees in this critical situation? You can just shift [switch] to government (public) school."
" In addition, the promise of digital transactions instilled hopes of digital repercussions on defaulting, like a meagre impact on participants’ CIBIL credit scores, thus increasing their overall comfort in borrowing instant loans. 5.1.3 Preserving social ties. Almost all our participants reported routinely turning to their closest circles during times of need. However, such lending-and-borrowin g was riddled with complexities. First, it was dicult for participants to even muster the courage to ask their social circles. In social circles, small loans were indicative of participants’ inability to manage their households, and thus hurt their respectability. When P14 asked relatives for help with her child’s education, she received unsolicited advice in the guise of care: “[They said], ‘why do you want to send [your kid] to that school paying high fees in this critical situation? You can just shift [switch] to government (public) school." 訳抜け防止モード: 「さらに、デジタル取引の約束は、デフォールトにデジタル影響の希望を植え付けている。」 参加者のcibilクレジットスコアに対する単純な影響のように、全体的な快適感を高めます。 インスタントローンを借りる。 5.1.3 社会的なつながりの保存. 参加者のほとんどが,必要な時にいつも最寄りの円に目を向けていると報告した。しかし,そのような貸付 - そして、借り物は複雑に満ちていた。まず、参加者が社会的な円に問いかける勇気さえも必要だった。社会的な円の中で。 少額の融資は、参加者が家族を管理することができないことを示している。 p14が親類に子供の教育の助けを求めたとき、彼らは尊敬を害した。 彼女は不注意な態度で不当な助言を受けた なぜあなたが望むのか? この危機的状況で 高校に高給で送るために? 学校を政府(公立)に移すだけ」と語っている。
0.72
Public schools in India offer free education, and are often viewed as schools for children from lower socio-economic backgrounds. Education is highly regarded as a mobility tool for the middle classes in India. Hence, sending children to well-regarded private schools is both a responsibility, and a matter of pride for parents, leading P14 to perceive the advice as derogatory. Given such humiliating experiences, participants equated borrowing from social circles with pledging their “self-respect.”
Naturally, when their requests for money were unmet, participants like P9 dealt with extreme feelings of rejection that strained relationships: “I knew they had the money, and they still refused. That is why I don’t feel like asking anyone money... Earlier I used to keep in touch regularly, now that bonding is not there."
Small, predetermined loans offered by instant loan platforms removed the burden of asking, and alleviated worries about social image for participants. In addition, participants hoped that the ‘instant’ nature of loans would ensure that they did not dwell on their feelings in case of rejection.
As P26 said, “If I get [the loan] I’m lucky enough.
If not, [...] [t]here’s no risk involved.
[...] [Next day], there is a tendency that you could forget also.
You would just move off (on).”

5.1.4 Promising social mobility. Some participants had been lured into the credit system previously through shopping and entertainment, but such experiences had rarely ended well. They had subscribed to comforts and generous credit limit increases without worrying about monthly EMI payments. Few users had understood how credit cards worked, what credit scores meant, and the implications of defaulting on their credit bills. P20 learned about the implications of the credit scoring system through negative experiences: “First we got INR 5000 from [financial institution], which gradually increased to INR 15,000 [and finally reached] INR 1,60,000. I purchased many products through [credit][...]. I now have [a credit score of] 600. Because of this, nobody is giving us loans.”
Participants with middle class aspirations were often fearful, and expressed aversion to incurring large debts.
However, as P08 put it, seemingly small loans offered by instant loan platforms were necessary evils that could help users build credit and achieve dreams of mobility: “[B]anks should grant us loans in the future.
[...] If we don’t take a loan, [credit score] will go in the negative.” Participants thus reported seeking instant loans for their children’s education, for upgrading their comforts, and to secure financial independence.

FAccT ’22, June 21–24, 2022, Seoul, Republic of Korea
Ramesh et al.
5.2 Perceived Obligations to Instant Loan Platforms

Instant loan platforms were sources of immediate money and also the only means of survival for participants during difficult times when they ran from pillar to post to seek financial help.
Participants used instant loans to manage their everyday expenses, ranging from buying groceries, paying for school fees of their children, to clearing outstanding debts.
Thus, as in the old adage, several participants equated the loan platforms with friends, and expressed intense feelings of indebtedness towards loan companies.
P01, who sought instant loans when his business went haywire, said, “It really helped me during my tough times, so I actually owe them and I’m actually [still] owing them...
In the anticipation of ‘instant’ money, several participants acknowledged that they had simply clicked on ‘I agree’ to the terms and conditions of the apps, without expending the slightest effort to understand what they were consenting to.
For instance, P4 explained how he generated 3D views of his face as instructed by a facial recognition bot, “It will ask for a selfie.
Turn both the sides, open the mouth... blink your eyes, rotate your head.” Obliging such requests was mandatory if participants wished to proceed with their applications. Some others recalled agreeing to potential legal actions and home visits in the case of defaulting on small loans. Such acceptances were viewed as mere formalities in getting access to progressively larger credit limits. For instance, P3 recalled their speedy acceptance of terms, and the subsequent ascent in credit limits: “[I]f you’re not repaying the loan then they will take the legal action. They can also come to the home, and you have to pay a penalty of eight rupees per day... [Y]ou have to say okay to all these things. [...] After this, they will give you 500 rupees (USD 7) first. Once you repay, they will give you 1000 (USD 15) (and so on).”
Quite naturally, our participants did not expect to negotiate the terms and conditions of instant loans.
In fact, they believed that if they were being offered money during a financially difficult time, they had an obligation to accept all the terms and conditions associated with the money.
In addition, almost all our participants strongly believed that regardless of the terms and conditions, a loan once sought must be rightfully returned to its owner to ‘restore justice.’ Consequently, our participants assumed all responsibility for a loan borrowed, and frequently associated defaulting on loans with ideas of ‘cheating’ and ‘injustice’ to the lender.
Several loan platforms also required access to users’ media and gallery, phone books, WhatsApp and Gmail contacts, location information, financial transaction texts, app usage analytics, and other device information.
P16 explained, “It is okay if they collect information.
If I have an intention to cheat then I should be scared.
[...] If I am willing to repay fairly, I need not be scared.
" We also found that participants’ mental models of instant loans shaped their data-sharing practices in complex ways. Several participants expressed some discomfort sharing such data. Some associated sensitive data with ideas of ‘intimacy.’ As P15 put it, “If they are tracking where I’m going and what I’m doing, it’s like sharing my family background (colloquial: wife’s background) with them."
Others discussed fears of misuse and online scams.
Yet, all participants had either already enabled permissions unknowingly, or showed willingness to do so.
P24 weighed her discomfort against the need for money and arrived at a compromise, “I got a thought that they will hack.
But at that time money was important... [N]ow in home loan we pledge papers, in gold loan we pledge gold, in the same way digitally we have to pledge all our information.
" Being used to models of lending and borrowing where trust in the exchange was facilitated through the value of pledged assets, our participants ‘pledged’ sensitive data as high-credibility collateral assets.

5.2.3 Making high fee payments. Instant loans came at high initial costs to participants. Platforms charged processing fees, disbursal fees, down-payments, often taking away 20-25% of loan amounts during disbursal. This was in addition to the high floating interest rates (15 - 35%) and penalty charged by platforms. For P10, these high fees were small costs of the convenience during what were difficult times for her: “in case we don’t pay consecutively for a month, some charges are there, but I wouldn’t call it a disadvantage. When you are getting all these advantages that’s a common thing. That’s perfectly fine.”
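The compounding effect of upfront deductions and interest can be made concrete with a rough, purely illustrative calculation. The figures below are hypothetical (chosen from within the 20-25% fee and 15-35% interest ranges reported above, not drawn from our participant data), and `effective_annual_rate` is our own helper, not any platform's formula:

```python
# Illustrative sketch: effective annualized cost of an instant loan when
# fees are deducted upfront at disbursal (hypothetical figures).

def effective_annual_rate(principal, upfront_fee_pct, annual_interest_pct, months):
    """Approximate annualized borrowing cost, given that the borrower
    receives only (principal - upfront fees) in hand but repays interest
    computed on the full principal."""
    received = principal * (1 - upfront_fee_pct / 100)          # cash in hand
    interest = principal * (annual_interest_pct / 100) * (months / 12)
    repaid = principal + interest                                # total paid back
    period_rate = (repaid - received) / received                 # cost over the term
    return period_rate * (12 / months) * 100                     # annualized, in %

# A hypothetical INR 10,000 loan: 20% deducted at disbursal,
# 25% annual interest, repaid over 3 months.
rate = effective_annual_rate(10_000, 20, 25, 3)
print(f"Effective annual rate: {rate:.0f}%")   # → Effective annual rate: 131%
```

Even at the low end of the reported fee and interest ranges, the upfront deduction dominates: the borrower pays interest on money they never received, so the effective annualized cost far exceeds the advertised rate.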
In addition, participants discussed how repeatedly borrowing through the same platform easily offset such costs.
They received attractive benefits like promotional codes, discounts, and better terms on new loans as rewards for their loyalty; these reciprocal exchanges were perceived as mutually beneficial by participants.
Thus, several participants developed emotional attachments to loan platforms.
P01’s emotional attachment nudged him towards safeguarding the interests of loan platforms through high fees: “I wouldn’t recommend this app to a person who doesn’t have intention to pay [high fees]... [...] That’s how the [platform] will be able to give salary to their associates and the people supporting them.”

5.3 Dependence on Instant Loan Platforms

FinTech companies often tout narratives of financial inclusion for ‘unbanked’ users through instant loans [5–7]. Participants circumvented barriers to access loans, borrowed cyclically through loan platforms, tolerated abuse from predatory lenders, and shared responsibility for their negative experiences, potentially leading to their financial and technology exclusion.
How Platform-User Power Relations Shape Algorithmic Accountability

We discuss these findings with the following codes:
1) Circumventing algorithmic discrimination,
2) Recurring debts,
3) Tolerating abuse, and
4) Assuming responsibility for loan platforms’ failures.

5.3.1 Circumventing algorithmic discrimination.
While instant loan apps are designed as single-user applications, we found evidence of intermediated use among our participants, as is commonly reported in previous research on technology-use in the Global South [12, 41, 72, 110].
Participants sought the help of others, often immediate family members or trusted close friends, to download and navigate the apps, submit their applications, and manage payments.
For instance, P10 borrowed through her husband’s phone after intuitively recognizing that instant loan limits could be influenced by the gender pay gap, and gendered patterns of digital activities.
" While such stereotype reinforcement through ADS could be viewed as potential barriers to access, evoking strong reactions in the West [22], our participants did not perceive them so. In fact, they underscored the importance of reading intentions in attributing experiences to discrimination. P07 acknowledged that disparate treatment could lead to unfair outcomes, but asserted that instant loan platforms did not intentionally discriminate based on gender: “If they are giving less to women and more to men, it will not be correct. [...] But when it comes to loans, mostly they will give equal amounts to everybody.”
Our participants shared similar views towards other issues of algorithmic discrimination.
P08, who identified as dark-skinned, talked about whitening their face digitally to get around potential intersectional accuracy disparities [26] in instant loan technology: “We could use ‘FaceApp’ to modify our looks.
If they (loan platforms) are grand thieves, we are petty thieves.
That is the only difference.”

5.3.2 Recurring debts. Instant loan platforms made borrowing money pleasurable for participants by offering gamified engagement. In addition to in-app discounts, surprise offers and virtual coins that we discussed earlier, we found that some apps also gamified credit limit increases; the platforms would first offer small amounts like INR 5,000-10,000 (USD 75-135); users would then ‘unlock’ higher credit limits when they neared their repayment terms, mimicking level-increases in virtual games. Some participants suspected that such gamified mechanisms were recovery nudges in disguise. P7 with an outstanding loan of INR 10,000 (USD 135) noted, “Now that INR 2,00,000 (USD 2700) lock has opened up... I feel they opened the lock to show me that I will be eligible for a larger amount when I close this loan.”
We found that these mechanisms, in addition to ready acceptances of instant loans by our participants, had led several of them to borrow beyond their capacity.
Such participants had then engaged in cyclical borrowing from several different apps to ‘balance’ their loans.
P23 had once gotten into an addictive rhythm of unlocking higher credit limits in the loan apps: “[Let’s say] we have cleared the first level, so it seems like they have confidence in us, and have automatically increased the limit.
[...] I didn’t realize it then, and would end up accepting the loan in a hurry.
[...] I would run here and there and borrow from friends to repay the loan.
[...] I would take a loan from another app to repay the friend. I had loans from 4 apps at one point.
" Other participants didn’t consider themselves ‘addicted’ to instant loans, but regretted cyclical borrowing. They justified recurring loans as unavoidable by-products of their financial vulnerability and social obligations.

5.3.3 Tolerating abuse. We also found evidence of abuse in our study. Through loan platforms, some participants fell prey to predatory lenders who employed aggressive recovery tactics for small amounts of money, as little as INR 2000 (USD 27). Their tactics included repeatedly harassing borrowers for repayments through calls and texts, issuing threats of legal action, broadcasting sensitive information to borrowers’ contacts on WhatsApp and other social media, shaming defaulters, targeted harassment of borrowers’ contacts, and home visits. The digital medium allowed predatory lenders to abuse borrowers at scale. For instance, lenders performed semantic association on borrowers’ contact lists to identify their close contacts (sometimes inaccurately) and harass them. Such tactics caused immense emotional and reputation harm to participants, and damaged their dignity. P4 encountered stigmatization in their social circles: “They contacted my friends and family through WhatsApp. They shared my photo and published my details saying I had taken loan and hadn’t repaid and started harassing them... Because of this I lost a lot of friends. I even had troubles with relatives. I ended up losing my job. [...] I was very upset but did not share it with anyone. At one point, I even tried to commit suicide.”
Unfortunately, until December 2020, instant loan platforms had received little attention from the Reserve Bank of India (RBI).
They simply believe us based on what we type (our data) [...].
So, we cannot find fault with the app.
[...] This app helps us when all others have abandoned us.”

5.3.4 Assuming responsibility for loan platforms’ failures. Our participants often viewed their negative experiences through the lenses of ‘incompetency’, and assigned self-blame for their experiences. Even if loan platforms were at fault, they were seemingly offering loans with no asset requirements; therefore, any rule or tactic was justifiable. P23, a survivor of abuse from a predatory loan platform, reflected on their learnings: “I was very firm about not availing app loans but since my friend suggested it I took it. [...] I did not think about whether we could repay the loan during difficult times. Corona has taught me a very good lesson.”
Other negative experiences included losing money to fake apps, or being rejected by loan platforms without due explanations.
Contrary to normative expectations of recourse, such negative technological experiences induced feelings of ‘shame’ in our participants who were less likely to share such experiences with their peers or seek help.
In addition, participants’ ardent optimism in technology and a lack of confidence in their technical abilities often led them to assume unfair responsibility for their negative experiences.
P22, who was confident about her creditworthiness, blamed her lack of technical skills for an unexplained loan rejection: “Maybe I made some mistake while typing.
Because if they look at my PAN card, they will definitely give me a loan.
So I feel that there must be some kind of mistake that I made.
" Unfortunately, for participants with a futuristic outlook on technology, negative experiences reinforced their beliefs that they would never be the intended audience for ‘high tech’ applications, resulting in technology abandonment. As P2 put it, the doors to an AI-powered future remained closed to them: “I felt that this gate had closed for me. I felt I shouldn’t go around and ask for money, or on these apps.”

6 DISCUSSION

Our work fills a critical gap in the research on algorithmic accountability: we provide an understanding of social conditions of accountability through the experiences of (potentially) vulnerable users who are constrained in their capacity to seek accountability from technology providers.
We situate these findings in the larger discourse on algorithmic accountability, and provide some suggestions for contextualizing the design of accountability mechanisms.
Accountability

Current discourse on algorithmic accountability rests on the existence of accountability relationships between technology providers responsible for causing harm through ADS, and the individuals experiencing harm through ADS (or their representatives) [121].
In this relationship, the technology providers are obligated to provide ‘accounts’ to those individuals who are harmed [15, 96, 101, 123]; these individuals or their representatives may then judge the accounts and seek to impose consequences if necessary.
Consequently, much work in algorithmic accountability often presents ‘sharing of information’ by technology providers as the first phase of accountability [53, 64, 89, 121].
Prior work calls for involving affected individuals in designing accountability mechanisms to ensure that the information is meaningful to them [42, 75].
Our work extends this argument to show that purely technical approaches to accountability obscure the socio-political realities of stakeholders that make such ‘information sharing’ necessary in the first place.
In our study, exchanges enabled by AI-based instant loans reconfigured users’ relations to instant loan platforms in ways that distract from the goals of algorithmic accountability.
Users fulfilled these obligations in both material and intangible ways, and persisted despite human and other costs, such as abuse, discrimination, recurring debts, privacy harms, and self-harm to them.
Contrary to the normative behaviors of outrage in users documented from work in the West [101], users in our study did not believe it was in their right to
Instead, they assumed responsibility for the failures of loan platforms, thus demonstrating a dependence, and releasing those high-powered actors from the obligations of accountability.
Thus, we argue that algorithmic accountability is mediated through platform-user power relations, and can be stymied by on-the-ground socio-political conditions of users.
We need more research on the relationship between accountability mechanisms, agency of users, and the impetus for action in different socio-political contexts to ensure responsible AI more widely.
We build on the work of Katell et al. [74], and propose a situated approach to algorithmic accountability.5

6.1.1 Enhancing agency of the forum through critical awareness.
New internet users, with vastly different mental models of AI, can place misguided trust in ADS [82, 93, 109].
Such high user-trust in AI systems played out in several ways in our study: ready acceptances of terms, conditions, and loan decisions, often to the extent of users reevaluating their own competencies and abilities.
However, design and research in user-centered AI often assumes low trust in AI, and begins with questions of ‘how might we design for increased user trust in AI’?
However, such measures must be complemented by widespread AI literacy programs.
Google's trust and safety initiative for users in India is one such example [66].
More support must be given to grassroots organizations that are working to raise public awareness.
An outstanding example is Internet Freedom Foundation’s Project Panoptic that is raising awareness on public-facing facial recognition systems in India [67].
Such efforts must be supported by programs that not only up-skill citizens to be AI designers, but also nurture critical thinkers who can serve as AI testers and AI auditors.
However, lack of technical expertise among the public could render such transparency meaningless.
Thus, corporate actors and governments must work with civil advocacy groups to create toolkits that consumer advocates can use towards accountability efforts.
The Algorithmic Equity toolkit by ACLU Washington could serve as a model for such aims [74].
Further, for transparency to serve the goal of answerability, it must generate sufficient pressure from the forum that forces actors to respond to violations.
When platform-user power relations are asymmetrical, such pressure is unlikely to come from individual users acting alone. (We use situated accountability differently from Henriksen et al. [62], who refer to the need for situating accountability policies in the practices of designers and engineers working on the development of AI systems.)

6.1.2 Enabling meaningful transparency through collective action.
Through our study, we saw that new internet users are often ashamed of their negative experiences, making it unlikely for them to share their experiences with other users offline.
Ahmed et al.'s Protibadi [13], a system to mobilize support against sexual harassment in Bangladesh, and Irani and Silberman's Turkopticon [68], which inverts requester-worker power relations on Amazon Mechanical Turk, are examples of intervention opportunities for researchers interested in algorithmic accountability.
6.1.3 Re-configuring designer-user relations through community engagement.
Algorithmic harms such as bias and discrimination are extensively studied in FAccT, and receive extensive attention especially in Western media [26, 61, 87, 123]. Yet these were not the harms that most concerned users in our study. Rather, they expressed significant concerns about alternate forms of harm from ADS systems, such as data leaks, gossip in social circles resulting from data leaks, reputation damage, and social frictions.
Prior work has already pointed to the need to re-contextualize harm measurements [109].
We extend this argument and draw on work by Metcalf et al. [88] to suggest that we must co-construct measurements of harms with the community of stakeholders involved. While doing so, we must also recognize that a purely computational framing of harms fails to address injustices caused by structural oppression [63, 77].
Design Beku [104] is an excellent model for how this could be done.
6.1.4 Committing to justice through critical self-reflection.
Users' behaviors towards AI-based predatory applications, including justification, tolerance, acceptance, and self-blame, led to extreme consequences such as abuse, reputation harm, and self-harm.
How far can we go in addressing human rights issues with technical interventions?
What does accountability mean when predatory lenders create mobile applications with open-sourced machine learning algorithms and datasets, and slap on a usable interface to prey on vulnerable users?
‘Alternative lending’ uses mobile phone data to solve information asymmetry problems of lenders, who traditionally depend on tangible collateral assets [57].
Such models could also carry huge benefits to borrowers: as we saw in our study, they could open up opportunities for users who have never been a part of formal financial systems. Unfortunately, alternative lending could also have extreme downsides; without regulation or rules to define the limits of what counts as 'alternative data', the judgements made based on these data are largely arbitrary.
In addition, new internet users in the Global South (such as the users in our study) may overshare sensitive data in the name of high-quality collateral assets to unverified platforms, risking privacy harms.
Current techniques around privacy, data rights and data sovereignty rarely account for data as collateral assets, calling for research to re-frame designs around privacy, safety and trust.
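The arbitrariness is easy to see in a toy sketch. The feature names, weights, and threshold below are entirely invented for illustration (no real lender's model is implied); the point is that nothing external constrains any of these choices:

```python
# Hypothetical sketch of an "alternative data" credit scorer built from phone
# metadata. All feature names, weights, caps, and the threshold are invented
# for illustration -- real lenders disclose none of these, which is precisely
# why such judgements remain largely arbitrary absent regulation.

def alternative_credit_score(phone_profile: dict) -> float:
    """Return a score in [0, 1] from arbitrary phone-metadata proxies."""
    weights = {
        "contacts_count": 0.3,         # proxy for "social collateral"
        "sms_finance_keywords": -0.4,  # penalizes visible financial stress
        "battery_charge_pct": 0.1,     # spurious, but cheap to collect
        "apps_installed": 0.2,
    }
    # Normalize each raw value against an invented cap.
    caps = {"contacts_count": 500, "sms_finance_keywords": 50,
            "battery_charge_pct": 100, "apps_installed": 200}
    score = 0.5  # arbitrary baseline
    for feature, weight in weights.items():
        normalized = min(phone_profile.get(feature, 0) / caps[feature], 1.0)
        score += weight * normalized
    return max(0.0, min(1.0, score))

def decide(phone_profile: dict, threshold: float = 0.55) -> str:
    """Approve or reject a loan based only on phone-usage proxies."""
    return "approve" if alternative_credit_score(phone_profile) >= threshold else "reject"

# Two applicants with identical repayment ability can receive opposite
# decisions purely on how they happen to use their phones:
applicant_a = {"contacts_count": 400, "sms_finance_keywords": 2,
               "battery_charge_pct": 90, "apps_installed": 150}
applicant_b = {"contacts_count": 40, "sms_finance_keywords": 30,
               "battery_charge_pct": 20, "apps_installed": 20}
```

Because nothing limits which signals count as 'alternative data', swapping any weight or cap yields a different, equally defensible-looking decision boundary.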
Further, the data that lending platforms collect can encode existing social disparities. For instance, loan platforms reproduced gender relations prevalent in economic and social spheres when women participants used their husbands' phones to seek loans. If the goal of AI-based lending is to achieve equitable financial inclusion, we must account for such data disparities in our imaginations of AI systems.
Further, data collection mechanisms may be predatory.
Users in our study reported receiving ads on their phones even when they were unsuccessful with the apps, or several months after they had stopped using the apps.
While one could argue that such predatory mechanisms could be curtailed with better user privacy, we remind the reader that giving consent and accepting privacy policies were obligations that financially stressed users readily fulfilled in exchange for 'instant' cash.
We call on designers of AI-based lending products to adopt responsible AI practices. Such practices include: sourcing data responsibly, i.e., ensuring that users' personally identifying information is protected at all times; preparing a data-maintenance plan for the life-cycle of the product; collecting routine user feedback, aligning feedback with model improvements, and communicating the value and time-to-impact to users; and identifying the factors that go into user trust, helping users calibrate their trust through the product experience, and managing influence on user decisions [15, 96, 111].
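To make one of the practices above concrete, here is a minimal sketch of protecting personally identifying information at rest; the pseudonymization scheme and field names are our own illustrative assumptions, not anything prescribed by [15, 96, 111]:

```python
# Minimal sketch of one practice named above: protecting users' personally
# identifying information. A user identifier is pseudonymized with a keyed
# hash before it is logged or stored, so the raw value never leaves the
# ingestion step. The salt handling and field choices are illustrative
# assumptions only.
import hashlib
import hmac
import os

SECRET_SALT = os.urandom(32)  # in practice, a managed secret rotated per policy

def pseudonymize(identifier: str) -> str:
    """Return a stable (per-key) pseudonym; the raw identifier is never stored."""
    return hmac.new(SECRET_SALT, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

def loan_event_record(phone_number: str, event: str) -> dict:
    # Only the pseudonym leaves this function; the phone number does not.
    return {"user": pseudonymize(phone_number), "event": event}

record = loan_event_record("+91-98xxxxxx01", "loan_application_submitted")
assert "+91" not in str(record)  # no raw PII in the stored record
```

A keyed hash (rather than a bare hash) is used so that an attacker who obtains the records cannot simply re-hash known phone numbers to reverse the pseudonyms without also obtaining the secret.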
We also call on designers to supplement these efforts with awareness campaigns on data and privacy rights for vulnerable users.
Beyond these implications, our work opens up policy questions such as: How do we communicate the potential risks of ‘instant’ money to users in dire circumstances?
By situating our findings in the algorithmic accountability discourse, we presented an argument that algorithmic accountability is mediated through platform-user power relations, and can be hindered by on-the-ground socio-political conditions of users. We proposed situated accountability interventions such as enhancing agency of the forum, enabling collective transparency, reconfiguring designer-user relations, and committing to critical self-reflection to ensure wider accountability.
We conclude with implications for FinTech applications in India and beyond.
ACKNOWLEDGMENTS We thank Azhagu Meena S P for assisting with interviews, and Vinodkumar Prabhakaran, Nikola Banovic, Jane Im, Nel Escher and Anindya Das Antar for helpful feedback on this work.
We also thank the reviewers at CHI'22 where a previous draft was first submitted, and the reviewers of FAccT for their helpful comments.
REFERENCES
In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2695–2704.
[14] Saleema Amershi. 2020. Toward Responsible AI by Planning to Fail. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 3607–3607.
[15] Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul N Bennett, Kori Inkpen, et al. 2019. Guidelines for human-AI interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–13.
[16] Amsterdam. 2020. Algorithmic Register Amsterdam. https://algoritmeregister.amsterdam.nl/en/ai-register/
FAQView.aspx?Id=92
[2] 2022. Dhani - India's Trusted Site for Finance, Healthcare and Online Medicines. https://www.dhani.com/
[3] 2022. Five ways that AI augments FinTech. https://indiaai.gov.in/article/five-ways-that-ai-augments-fintech
[4] 2022. Get line of credit up to Rs. 5 Lakhs - MoneyTap. https://www.moneytap.com/
[17] Mike Ananny and Kate Crawford. 2018. Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society 20, 3 (2018), 973–989.
[18] Varsha Bansal. 2021. Shame, suicide and the dodgy loan apps plaguing Google's Play Store.
2018. 'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–14.
[22] Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in Neural Information Processing Systems 29 (2016), 4349–4357.
[23] Danah Boyd and Kate Crawford. 2012. Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society 15, 5 (2012), 662–679.
[24] Virginia Braun and Victoria Clarke. 2012. Thematic analysis. (2012).
[25] Anna Brown, Alexandra Chouldechova, Emily Putnam-Hornstein, Andrew Tobin, and Rhema Vaithianathan. 2019. Toward algorithmic accountability in public services: A qualitative study of affected community perspectives on algorithmic decision-making in child welfare services.
2017. Locating the Internet in the Parks of Havana. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 3867–3878.
[42] Upol Ehsan, Q Vera Liao, Michael Muller, Mark O Riedl, and Justin D Weisz. 2021. Expanding explainability: Towards social transparency in AI systems. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–19.
[43] Upol Ehsan, Samir Passi, Q Vera Liao, Larry Chan, I Lee, Michael Muller, Mark O Riedl, et al. 2021. The Who in Explainable AI: How AI Background Shapes Perceptions of AI Explanations. arXiv preprint arXiv:2107.13509 (2021).
How Platform-User Power Relations Shape Algorithmic Accountability
FAccT '22, June 21–24, 2022, Seoul, Republic of Korea
[44] MC Elish and EA Watkins. 2020. Repairing innovation: A study of integrating AI in clinical care. Unpublished Manuscript (2020).
[45] Isil Erel, Lea H Stern, Chenhao Tan, and Michael S Weisbach. 2018. Could machine learning help companies select better board directors? Harvard Business Review 1, 5 (2018).
[46] Nel Escher and Nikola Banovic. 2020. Exposing Error in Poverty Management Technology: A Method for Auditing Government Benefits Screening Tools. Proceedings of the ACM on Human-Computer Interaction 4, CSCW1 (2020), 1–20.
[47] Motahhare Eslami, Aimee Rickman, Kristen Vaccaro, Amirhossein Aleyasen, Andy Vuong, Karrie Karahalios, Kevin Hamilton, and Christian Sandvig. 2015. "I always assumed that I wasn't really that close to [her]": Reasoning about Invisible Algorithms in News Feeds. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. 153–162.
[48] Motahhare Eslami, Kristen Vaccaro, Min Kyung Lee, Amit Elazari Bar On, Eric Gilbert, and Karrie Karahalios. 2019. User attitudes towards algorithmic opacity and transparency in online reviewing platforms. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–14.
[49] ET Government. 2021. Odisha launches AI based online life certificate system for pensioners.
[50] Virginia Eubanks. 2018. Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press.
[51] Simson Garfinkel, Jeanna Matthews, Stuart S Shapiro, and Jonathan M Smith. 2017. Toward algorithmic transparency and accountability.
[52] Susan Wharton Gates, Vanessa Gail Perry, and Peter M Zorn. 2002. Automated underwriting in mortgage lending: Good news for the underserved? Housing Policy Debate 13, 2 (2002), 369–391.
[53] Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. 2021. Datasheets for datasets. Commun. ACM 64, 12 (2021), 86–92.
[54] Tarleton Gillespie. 2014. The relevance of algorithms. Media Technologies: Essays on Communication, Materiality, and Society 167, 2014 (2014), 167.
[55] Leilani H Gilpin, David Bau, Ben Z Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. 2018. Explaining explanations: An overview of interpretability of machine learning. In 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA). IEEE, 80–89.
[56] Government of Tamil Nadu. 2020. Tamil Nadu Safe and Ethical Artificial Intelligence Policy. Technical Report.
[57] Darrell Grissen et al. 2019. Behavior Revealed in Mobile Phone Usage Predicts Loan Repayment. Technical Report. arXiv.org.
[58] Varun Gulshan, Lily Peng, Marc Coram, Martin C Stumpe, Derek Wu, Arunachalam Narayanaswamy, Subhashini Venugopalan, Kasumi Widner, Tom Madams, Jorge Cuadros, et al. 2016. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 316, 22 (2016), 2402–2410.
[59] M Haataja, L van de Fliert, and P Rautio. 2020. Public AI Registers: Realising AI transparency and civic participation in government use of AI. (2020).
[60] Alexa Hagerty and Igor Rubinov. 2019. Global AI ethics: a review of the social impacts and ethical implications of artificial intelligence. arXiv preprint arXiv:1907.07892 (2019).
[61] Sara Hajian, Francesco Bonchi, and Carlos Castillo. 2016. Algorithmic bias: From discrimination discovery to fairness-aware data mining. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2125–2126.
[62] Anne Henriksen, Simon Enni, and Anja Bechmann. 2021. Situated accountability: Ethical principles, certification standards, and explanation methods in applied AI.
In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems. 1–11.
[78] Bran Knowles, Lynne Blair, Mike Hazas, and Stuart Walker. 2013. Exploring sustainability research in computing: where we are and where we go next. In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing. 305–314.
[79] Steinar Kvale. 2008. Doing interviews. Sage.
[80] Ms Amina Lahreche, Ms Sumiko Ogawa, Ms Kimberly Beaton, Purva Khera, Majid Bazarbash, Mr Ulric Eriksson von Allmen, Ms Ratna Sahay, et al. 2020. The Promise of Fintech: Financial Inclusion in the Post COVID-19 Era. Technical Report. International Monetary Fund.
[81] Min Kyung Lee, Daniel Kusbit, Evan Metsky, and Laura Dabbish. 2015. Working with machines: The impact of algorithmic and data-driven management on human workers. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. 1603–1612.
[82] Min Kyung Lee and Katherine Rich. 2021. Who Is Included in Human Perceptions of AI?: Trust and Perceived Fairness around Healthcare AI and Cultural Mistrust. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–14.
[83] Q Vera Liao, Daniel Gruen, and Sarah Miller. 2020. Questioning the AI: informing design practices for explainable AI user experiences.
2021. A survey on bias and fairness in machine learning. ACM Computing Surveys (CSUR) 54, 6 (2021), 1–35.
[88] Jacob Metcalf, Emanuel Moss, Elizabeth Anne Watkins, Ranjit Singh, and Madeleine Clare Elish. 2021. Algorithmic impact assessments and accountability: The co-construction of impacts. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 735–746.
[89] Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. 429–435.
[102] Inioluwa Deborah Raji, Andrew Smart, Rebecca N White, Margaret Mitchell, Timnit Gebru, Ben Hutchinson, Jamila Smith-Loud, Daniel Theron, and Parker Barnes.