As the landscape of generative artificial intelligence fragments into a dozen competing architectures, the burden of choice has largely fallen on the end user. To determine whether Anthropic’s Claude outperforms Google’s Gemini or OpenAI’s GPT-4o for a specific professional task, one typically needs a suite of expensive, siloed subscriptions. LinkedIn is attempting to bridge this gap with Crosscheck, a new experimental feature that transforms the professional network into a neutral testing ground for large language models.
Currently rolling out to Premium subscribers in the United States, Crosscheck functions as a "blind taste test" for AI. A user enters a prompt and receives two side-by-side responses generated by different, undisclosed models. Only after the user selects their preferred answer does the system reveal the providers—which can include models from Amazon, Mistral, and MoonshotAI, among others. By stripping away brand names, LinkedIn aims to focus the user’s attention on the raw utility and accuracy of the output rather than the marketing surrounding specific labs.
The initiative, developed within LinkedIn Labs, also serves a broader data-gathering purpose. The platform plans to maintain a leaderboard tracking how professionals across different industries rate various models. This granular data could reveal whether legal professionals favor different linguistic nuances than software engineers or marketers, providing a rare look at how AI performance varies by sector. While currently limited to text-based prompts, Crosscheck represents a shift in LinkedIn’s strategy, positioning the platform not just as a repository for resumes, but as a utility layer for the AI-integrated workplace.
With reporting from Engadget.