
— AI & job vulnerability
Interactive “Chatbots @ Work” dashboard, slated for beta release in late 2025

Maiden Labs is building a suite of intelligence tools to offer unions and other worker advocates real-time, neutral data on AI models’ actual capabilities and risks across the workforce. In late 2025, we will launch an interactive dashboard on job vulnerability to AI. Additional tools in development focus on emerging risk intelligence: the latest real-world reports of AI harm, lawsuits in which AI is central, and public companies’ financial reporting on their use of AI.
— AI & Jobs Dashboard
How much real work can AI do, and how well? Accurate, actionable data on AI capabilities across real jobs is the first step toward an answer. Maiden Labs has developed the first interactive tool for measuring real-time, real-world job vulnerability to large language models (LLMs). Our approach combines the two leading datasets, one of LLM benchmark results and one of real employment surveys, to measure AI capability granularly at the task level.
- First, we draw on Stanford’s Holistic Evaluation of Language Models (HELM) project↗︎, which hosts over 3.7 million question-answer grades across hundreds of AI models, to develop a comprehensive accounting of what AI “can do” at the most granular possible level.
- We then map this to data on over 19,000 real-world job tasks from O*NET↗︎, the world’s largest set of job-specific data with detailed inputs from real professionals, to determine how well job tasks are represented by today’s AI benchmarks, and how well AI performs on job-relevant tasks.
- Bringing these data together, using shared skill, knowledge, and ability labels along with other task-specific annotations on automation potential, we offer the first interactive, test-driven tool for tailored analysis on the frontiers of widespread AI job disruption (a schematic sketch of the mapping follows).
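
To make the mapping concrete, here is a minimal Python sketch of the kind of join described above: per-question benchmark grades are averaged within shared skill labels, and each job task is then scored by a model’s mean performance on the skills it requires. The schema, skill labels, sample numbers, and scoring rule are illustrative assumptions for exposition only, not Maiden Labs’ actual data or methodology.

```python
# Illustrative sketch only: placeholder schema and numbers,
# not Maiden Labs' actual pipeline, HELM's schema, or O*NET's.
from collections import defaultdict
from statistics import mean

# Step 1: per-question benchmark grades (HELM-style), each tagged
# with shared skill labels. A grade of 1.0 means a correct answer.
benchmark_grades = [
    {"model": "model-a", "skills": {"reading_comprehension"}, "grade": 1.0},
    {"model": "model-a", "skills": {"reading_comprehension", "mathematics"}, "grade": 0.0},
    {"model": "model-a", "skills": {"mathematics"}, "grade": 1.0},
]

# Step 2: O*NET-style job tasks annotated with the same skill labels.
job_tasks = [
    {"task": "Prepare financial summaries", "skills": {"reading_comprehension", "mathematics"}},
    {"task": "Draft client correspondence", "skills": {"reading_comprehension"}},
]

def skill_scores(grades, model):
    """Average one model's grades within each shared skill label."""
    by_skill = defaultdict(list)
    for g in grades:
        if g["model"] != model:
            continue
        for skill in g["skills"]:
            by_skill[skill].append(g["grade"])
    return {skill: mean(vals) for skill, vals in by_skill.items()}

def task_vulnerability(task, scores):
    """Score a task by the model's mean performance on its skills.
    Tasks whose skills appear in no benchmark are flagged as
    uncovered rather than scored."""
    covered = [scores[s] for s in task["skills"] if s in scores]
    if not covered:
        return None  # task not represented by today's benchmarks
    return mean(covered)

scores = skill_scores(benchmark_grades, "model-a")
for task in job_tasks:
    print(task["task"], "->", task_vulnerability(task, scores))
```

In the real dashboard, the grades would come from HELM and the tasks and labels from O*NET; averaging within shared labels is one simple way to bridge benchmark items and job tasks that share no direct key, which is why coverage (tasks with no matching benchmark) is reported separately from performance.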
