Cool topic, @rosie! I have no experience with this so I’ll throw a dart without any idea if a target exists.
When I first read this, I thought it would be simple to verify the technology that makes autocomplete work. Then I read Jeremy’s list and realized this testing is about more than delivering the technology; it’s about delivering a valuable service. I started to wonder what defines that value.
Certainly, as Jeremy suggests, some logical ordering and some focus on what the user has already typed should drive the contents of the autocomplete list. To me, this becomes almost rule-based, and I could craft tests to evaluate the rules. But I’m still left with “Is the utility providing the right content for the autocomplete list?” and now wondering what the right content is.
Technical solutions that refine the list might be:
- Reviewing sentence context
- Autocomplete history for this string sequence and its synonyms
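The history idea above could be sketched as a simple ranking rule. This is only a hypothetical illustration (the `rank_suggestions` helper, the vocabulary, and the history list are all made up here): prefix matches are ordered by how often the user previously accepted each completion.

```python
from collections import Counter

# Hypothetical sketch: rank prefix matches by how often the user
# previously accepted each completion (the "autocomplete history" idea).
def rank_suggestions(prefix, vocabulary, history):
    history_counts = Counter(history)  # completion -> times accepted
    matches = [w for w in vocabulary if w.startswith(prefix)]
    # Most-accepted completions first; alphabetical as a tie-breaker.
    return sorted(matches, key=lambda w: (-history_counts[w], w))

vocabulary = ["test", "testing", "tester", "team"]
history = ["testing", "testing", "tester"]
print(rank_suggestions("te", vocabulary, history))
# -> ['testing', 'tester', 'team', 'test']
```

A rule like this is exactly the kind of thing I could write deterministic tests against, which is what makes the rule-based framing appealing.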
As the complexity evolves to provide “value” and “content”, the testing could become a drudgery of checking autocomplete lists. Clearly, I could, through automation, present a dictionary of strings and words to the autocomplete component and log the results. For a finite set of strings and words, I could determine that the behavior of autocomplete narrowly meets the “value” and “content” intents. With an agile mindset, that would be a deployable product.
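That automation could be a thin harness like the sketch below. Everything here is assumed for illustration: the real autocomplete component is stood in for by a trivial prefix matcher, and `check_autocomplete`, the vocabulary, and the expected lists are hypothetical names, not anyone’s actual API.

```python
# Hypothetical harness: feed a set of prefixes to the autocomplete
# component and log each result, flagging lists that miss expectations.
def autocomplete(prefix, vocabulary):
    # Stand-in for the real component: naive prefix match.
    return sorted(w for w in vocabulary if w.startswith(prefix))

def check_autocomplete(prefixes, vocabulary, expected):
    results = {p: autocomplete(p, vocabulary) for p in prefixes}
    failures = {p: r for p, r in results.items() if r != expected.get(p)}
    return results, failures

vocabulary = ["deploy", "deployable", "drudgery", "dictionary"]
expected = {"de": ["deploy", "deployable"], "dr": ["drudgery"]}
results, failures = check_autocomplete(["de", "dr"], vocabulary, expected)
print(failures)  # an empty dict means the rules held for this finite set
```

The catch, as noted above, is that this only verifies the finite set of inputs you thought to feed it; it checks the rules, not the “value”.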