This is an overview page with metadata for this scientific work. The full article is available from the publisher.
AI-driven scientific advertising: ethics, visibility, and research integrity
Citations: 0
Authors: 2
Year: 2026
Abstract
• AI scientific platforms raise ethical concerns about paid visibility
• Paid promotion can distort trust, visibility, and research integrity
• Clear labeling can protect credibility in AI-driven research discovery
• Quality thresholds can limit harm from promoted scientific content
• Advertising revenue may support fairer scientific visibility

The integration of AI-driven advertising into scientific discovery platforms like Elicit.com and Consensus.app generates a critical ethical tension: while enhancing visibility and revenue, such advertising risks conflating ad-driven prominence with scholarly merit, threatening research integrity, trust dynamics, and equitable knowledge dissemination. To resolve this conflict, a trust-centric framework grounded in signaling principles and trust-transfer mechanisms is proposed. This framework addresses tensions between paid promotions and traditional credibility signals, such as peer review and citation impact, through integrated strategies: transparency via labeled disclosures and scholarly relevance thresholds to prevent signal conflict; objectivity through algorithmic segregation and third-party audits to sustain trust transfer; and equity employing progressive bidding models and visibility reserves to counteract systemic bias. By redefining credibility assessment in hybrid scientific ecosystems and providing actionable solutions like rigor-weighted ad algorithms, the framework advances marketing scholarship while aligning monetization with academic values. Consequently, marketing emerges as the essential architect of ethical visibility, ensuring advertising amplifies scientific rigor rather than eroding it in algorithmic knowledge economies.
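The abstract's "rigor-weighted ad algorithms" with relevance thresholds could be sketched as follows. This is a hypothetical illustration, not the paper's actual method: the field names, the rigor weight, and the threshold value are all illustrative assumptions.

```python
# Hypothetical sketch of a rigor-weighted ad ranking: promoted papers are
# filtered by a scholarly-relevance threshold, then ranked by a score that
# discounts the advertiser's bid by the paper's rigor score.
from dataclasses import dataclass


@dataclass
class PromotedPaper:
    title: str
    bid: float       # advertiser's bid (arbitrary currency units)
    rigor: float     # 0..1 scholarly-rigor score (e.g. peer-review status)
    relevance: float # 0..1 query-relevance score


def rank_promoted(papers, relevance_threshold=0.5, rigor_weight=0.7):
    """Drop papers below the relevance threshold, then rank the rest by
    bid discounted by rigor, so money alone cannot buy top placement."""
    eligible = [p for p in papers if p.relevance >= relevance_threshold]
    return sorted(
        eligible,
        key=lambda p: p.bid * (p.rigor ** rigor_weight),
        reverse=True,
    )


papers = [
    PromotedPaper("High bid, low rigor", bid=10.0, rigor=0.2, relevance=0.9),
    PromotedPaper("Modest bid, high rigor", bid=4.0, rigor=0.9, relevance=0.8),
    PromotedPaper("Off-topic", bid=20.0, rigor=0.9, relevance=0.2),
]
ranking = rank_promoted(papers)
```

Under these illustrative weights the high-rigor paper outranks the higher bidder, and the off-topic paper is excluded entirely, matching the abstract's goal of preventing ad-driven prominence from overriding scholarly merit.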
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,545 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,436 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,935 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,589 citations