Unveiling Legitimacy in the Unexpected Events Context: An Inquiry into Information System Consultancy Companies and International Organizations through Topic Modeling Analysis
- URL: http://arxiv.org/abs/2407.17509v1
- Date: Mon, 8 Jul 2024 07:44:03 GMT
- Title: Unveiling Legitimacy in the Unexpected Events Context: An Inquiry into Information System Consultancy Companies and International Organizations through Topic Modeling Analysis
- Authors: Oussama Abidi
- Abstract summary: This study focuses on the communication of two key stakeholders: IS consultancy companies and international organizations.
To achieve this objective, we examined a diverse array of publications released by both actors.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In an increasingly dynamic and modern market, the recurrence of unexpected events necessitates proactive responses from information system (IS) stakeholders. Each IS actor strives to legitimize its actions and communicate its strategy. This study delves into the realm of IS legitimation, focusing on the communication of two key stakeholders: IS consultancy companies and international organizations, particularly in the context of unexpected events. To achieve this objective, we examined a diverse array of publications released by both actors. Employing a topic modeling methodology, we analyzed these documents to extract valuable insights regarding their methods of legitimation. Through this research, we aim to contribute to the legitimation discourse literature by offering an exploration of two key IS stakeholders responding to the challenges posed by unexpected events.
Related papers
- A Cooperative Multi-Agent Framework for Zero-Shot Named Entity Recognition [71.61103962200666]
Zero-shot named entity recognition (NER) aims to develop entity recognition systems from unannotated text corpora.
Recent work has adapted large language models (LLMs) for zero-shot NER by crafting specialized prompt templates.
We introduce the cooperative multi-agent system (CMAS), a novel framework for zero-shot NER.
arXiv Detail & Related papers (2025-02-25T23:30:43Z)
- Which Information should the UK and US AISI share with an International Network of AISIs? Opportunities, Risks, and a Tentative Proposal [0.0]
The UK AI Safety Institute and its parallel organisation in the United States occupy a unique position in the recently established International Network of AISIs.
This paper argues that it is in the interest of both institutions to share specific categories of evaluation information with the International Network.
arXiv Detail & Related papers (2025-02-05T16:49:02Z)
- Illusions of Relevance: Using Content Injection Attacks to Deceive Retrievers, Rerankers, and LLM Judges [52.96987928118327]
We find that embedding models for retrieval, rerankers, and large language model (LLM) relevance judges are vulnerable to content injection attacks.
We identify two primary threats: (1) inserting unrelated or harmful content within passages that still appear deceptively "relevant", and (2) inserting entire queries or key query terms into passages to boost their perceived relevance.
Our study systematically examines the factors that influence an attack's success, such as the placement of injected content and the balance between relevant and non-relevant material.
arXiv Detail & Related papers (2025-01-30T18:02:15Z)
- The Imperative of Conversation Analysis in the Era of LLMs: A Survey of Tasks, Techniques, and Trends [64.99423243200296]
Conversation Analysis (CA) strives to uncover and analyze critical information from conversation data.
In this paper, we perform a thorough review and systematize CA task to summarize the existing related work.
We derive four key steps of CA: conversation scene reconstruction, in-depth attribution analysis, targeted training, and conversation generation.
arXiv Detail & Related papers (2024-09-21T16:52:43Z)
- LLM as a Mastermind: A Survey of Strategic Reasoning with Large Language Models [75.89014602596673]
Strategic reasoning requires understanding and predicting adversary actions in multi-agent settings while adjusting strategies accordingly.
We explore the scopes, applications, methodologies, and evaluation metrics related to strategic reasoning with Large Language Models.
The survey underscores the importance of strategic reasoning as a critical cognitive capability and offers insights into future research directions and potential improvements.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Unmasking the Shadows of AI: Investigating Deceptive Capabilities in Large Language Models [0.0]
This research critically navigates the intricate landscape of AI deception, concentrating on the deceptive behaviours of Large Language Models (LLMs).
My objective is to elucidate this issue, examine the discourse surrounding it, and subsequently delve into its categorization and ramifications.
arXiv Detail & Related papers (2024-02-07T00:21:46Z)
- Responsible AI Considerations in Text Summarization Research: A Review of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z)
- The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice [64.29355073494125]
This article aims to ground what we dub the "participatory turn" in AI design by synthesizing existing theoretical literature on participation.
We articulate empirical findings concerning the current state of participatory practice in AI design based on an analysis of recently published research and semi-structured interviews with 12 AI researchers and practitioners.
arXiv Detail & Related papers (2023-10-02T05:30:42Z)
- An Uncommon Task: Participatory Design in Legal AI [64.54460979588075]
We examine a notable yet understudied AI design process in the legal domain that took place over a decade ago.
We show how an interactive simulation methodology allowed computer scientists and lawyers to become co-designers.
arXiv Detail & Related papers (2022-03-08T15:46:52Z)
- AI Ethics Statements -- Analysis and lessons learnt from NeurIPS Broader Impact Statements [0.0]
In 2020, the machine learning (ML) conference NeurIPS broke new ground by requiring that all papers include a broader impact statement.
This requirement was removed in 2021, in favour of a checklist approach.
We have created a dataset containing the impact statements from all NeurIPS 2020 papers.
arXiv Detail & Related papers (2021-11-02T16:17:12Z)
- Machine Learning for Fraud Detection in E-Commerce: A Research Agenda [1.1726720776908521]
We take an organization-centric view on the topic of fraud detection by formulating an operational model of the anti-fraud departments in e-commerce organizations.
We derive 6 research topics and 12 practical challenges for fraud detection from this operational model.
arXiv Detail & Related papers (2021-07-05T12:37:29Z)
- Topic-Aware Multi-turn Dialogue Modeling [91.52820664879432]
This paper presents a novel solution for multi-turn dialogue modeling, which segments and extracts topic-aware utterances in an unsupervised way.
Our topic-aware modeling is implemented by a newly proposed unsupervised topic-aware segmentation algorithm and Topic-Aware Dual-attention Matching (TADAM) Network.
arXiv Detail & Related papers (2020-09-26T08:43:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.