Behind Google’s claim that its “What People Suggest” feature was removed as part of a routine search simplification lies a story about the risks of deploying AI in health-sensitive search contexts. The feature used AI to organize health advice from online strangers and presented it in Google Search. Three insiders confirmed its removal, and Google’s subsequent explanation left significant transparency gaps.
Unveiled at Google’s “The Check Up” health event in New York, the feature was presented by then-chief health officer Karen DeSalvo as a valuable addition to health search. She wrote that the tool would give users access to community health experiences organized by AI, complementing expert medical content. The initial rollout targeted mobile users in the US.
Google stated that safety played no role in the removal, attributing the decision instead to simplification of the search page. But when asked for a public record of the announcement, the company pointed to a blog post that made no mention of the feature’s discontinuation. This inconsistency has fueled speculation about the real motivations behind the decision.
The context includes a major investigation earlier this year that found Google’s AI Overviews were distributing medically inaccurate content to approximately two billion monthly users. Google removed AI Overviews for some health searches following the investigation, though health experts called the response inadequate.
The upcoming Google health event will offer the company a fresh opportunity to present its vision for AI in healthcare. But the unresolved questions around “What People Suggest” will follow it into that event. Real credibility in health AI requires not just innovation but transparent acknowledgment of failures — something Google has yet to fully demonstrate.