When people search for a dentist, physiotherapist, or cosmetic clinic, many now rely on generative AI rather than traditional search engines. A new independent study suggests that the way these systems recommend clinics follows consistent patterns that may subtly shape how people interpret healthcare options.
The research was conducted by José Francisco Ouviña and focuses on how generative AI systems structure their answers when asked to recommend clinics in Spanish cities. The study does not assess whether the clinics mentioned are good or bad, nor does it verify the accuracy of the information provided. Instead, it examines the form and organisation of the responses users receive.
The analysis covered 1,530 responses generated in the first half of January 2026. These responses were produced by three widely used generative AI systems and were based on identical questions asked across 51 Spanish cities with populations above 100,000. The study included three common types of local healthcare services: dental clinics, physiotherapy clinics, and aesthetic medicine clinics.
Each system was asked the same fixed set of ten questions, ranging from open requests for clinic recommendations to short summary prompts. This approach allowed the researcher to compare how answers changed depending on how a question was framed, while keeping all other variables constant.
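The reported total of 1,530 responses is consistent with a full factorial grid over the three systems, 51 cities, and ten questions. A minimal sketch of that grid, using placeholder names for the systems, cities, and prompts (none of which are taken from the study itself):

```python
from itertools import product

# Placeholder labels -- the study does not name its systems, cities, or prompts here.
systems = [f"system_{i}" for i in range(3)]   # three generative AI systems
cities = [f"city_{i}" for i in range(51)]     # 51 Spanish cities above 100,000 residents
prompts = [f"question_{i}" for i in range(10)]  # ten fixed question templates

# Every (system, city, question) combination asked once.
grid = list(product(systems, cities, prompts))
print(len(grid))  # 3 * 51 * 10 = 1530
```

Because every combination is asked exactly once, differences between answers can be attributed to the question wording, the system, or the clinic type rather than to uneven sampling.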
One of the clearest findings was that the wording of a question had the strongest influence on how an answer was structured. Open or comparative questions tended to generate longer responses, often organised into lists with descriptive detail. Short prompts that asked for brief explanations produced much more concise answers, sometimes just a few sentences long.
There were also clear differences between the AI systems themselves. Some consistently generated longer and more detailed text, while others favoured shorter and more compact responses. The study found recurring contrasts in the use of lists, the inclusion of warnings or limitations, references to external sources, and the use of first-person language.
The type of clinic being recommended played a more limited role. While there were some variations in how often practical information or cautionary notes appeared for different clinic types, these differences were relatively small. Compared with the influence of question design and system choice, the clinical category had a much weaker effect on response structure.
Crucially, the study makes clear what it does not do. It does not analyse clinical quality, reputation, patient outcomes, or commercial visibility. It also does not attempt to explain how AI systems decide which clinics to mention. The findings describe observable patterns in generated text, not the underlying logic or intent of the systems.
As generative AI becomes a routine tool for local health searches, these patterns matter. Longer, well-structured responses may appear more authoritative or trustworthy to users, even when no quality assessment is involved. Shorter answers may feel less informative, despite conveying similar content in a different format.
By documenting how generative AI systems consistently shape clinic recommendations, the study provides an early framework for understanding how form, rather than substance, can influence perception in digital healthcare searches.

