This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Method and Platform for Validating Human Inventorship in AI-Augmented Molecular Discovery via Iterative Audit Gates — USPTO Provisional Patent Specification
Citations: 0
Authors: 2
Year: 2026
Abstract
Provisional patent specification for the "Audit of Intent" (AOI) platform: a structured five-gate human-in-the-loop (HITL) methodology for governing AI-augmented molecular and biomedical discovery in a manner that produces a verifiable, timestamped record of human intellectual conception, satisfying statutory inventorship requirements under 35 U.S.C. § 101 and § 112 and USPTO 2024 guidance on AI-assisted inventions. The five gates are: (1) Axiomatic Constraint Gate — documenting pre-computational biological priors and dataset selection criteria; (2) Iterative Prompting Protocol — treating natural language directives as timestamped formal scientific protocols constituting documented evidence of human conception; (3) Phenotypic Anchoring — cross-referencing AI outputs with clinical domain expertise; (4) Functional Reification — verifying AI-derived signatures against immutable primary data repositories; and (5) Legal Certification — generating a structured Verification Note satisfying the Pannu factors for significant human contribution. The platform reconceptualizes natural language prompts as formal scientific protocols, transforming every substantive research directive into a component of the inventor's conception record. Validated across a six-paper computational research program on Systemic Sclerosis (related filing: USPTO Provisional Application No. 63/990,695), enabling identification of MEK/SRC kinase inhibitor therapeutic candidates and the Galectin-9/TIM-3 immune checkpoint biomarker. USPTO Provisional Application No. filed February 25, 2026, Confirmation No. 4333.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,485 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,371 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,827 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,549 citations