Disclaimer: Google only attracts my annoyances more because I use it so much. And I'm confident they can do even better, and so I'm helping by writing this stuff down!
Back in my day, when you typed something in quotes into a search engine, you'd get an exact match! Well... OK, sometimes that meant "phrase search" or "turn off stemming"... but still, if it was only a ONE WORD query, and I still took the time to put it in quotes, then the engine knew I was being VERY specific.
But now that everyone's flying with jet-packs and hover boards, search engines have decided that they know more than I do, and so when I use quotes, they seem to ignore them!
I can't give the exact query I was using, but let's say it'd been "IS_OF". Google tries to talk me out of it, doing a "Show results for (something else)", but then I click on the "Actually do what I said" hyperlink. And even then it still doesn't. In this made-up example, it'd still match I.S.O.F. and even span sentence gaps, as in "Do you know what that *is*? *Of* course I do!"
The Technical Challenge:
To be fair, there are technical problems with trying to match arbitrary exact patterns of characters in a scalable way. Punctuation presents a challenge, with many possible ways to handle it. And most engines use tokenization, which implies word breaks, which normally wouldn't handle arbitrary substring matching.
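To make that concrete, here's a minimal sketch of a typical analyzer (the regex and behavior are my illustration, not any particular engine's): it lowercases and splits on non-alphanumerics, so case and punctuation are gone before the index is ever built.

```python
import re

def tokenize(text):
    # A typical analyzer: lowercase everything, split on anything
    # that isn't a letter or digit. Case and punctuation are
    # discarded before the tokens reach the index.
    return re.findall(r"[a-z0-9]+", text.lower())

# "IS_OF" and "is of" produce identical token streams, so an index
# built this way literally cannot tell the two queries apart.
print(tokenize("IS_OF"))             # ['is', 'of']
print(tokenize("Of course I do!"))   # ['of', 'course', 'i', 'do']
```

Once the underscore and the capitals are stripped here, no amount of quoting at query time can bring them back.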
At least with some engines, if you want to support both case-insensitive and case-sensitive matching, you have two different indexes, with the latter sometimes being called a "casedex". Other engines allow you to generate multiple overlapping tokens within the index, so "A-B" can be stored as both separate A's and B's, and also as "AB", and also as the literal "A-B", so any form will match.
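That overlapping-token trick can be sketched in a few lines (again my own illustration, not a real engine's analyzer): emit the literal token, its punctuation-stripped concatenation, and its individual parts, and store all of them at the same position.

```python
import re

def index_terms(token):
    # Emit several variants of one raw token so any query form
    # will hit: the literal form, the separate word parts, and
    # the parts concatenated with the punctuation removed.
    terms = {token}                                 # literal "A-B"
    parts = [p for p in re.split(r"[^A-Za-z0-9]+", token) if p]
    terms.update(parts)                             # "A", "B"
    terms.add("".join(parts))                       # "AB"
    return terms

print(sorted(index_terms("A-B")))  # ['A', 'A-B', 'AB', 'B']
```

The price, of course, is a fatter index: every punctuated token costs several postings instead of one.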
Some would say I'm really looking for the Unix "grep" command, or the SQL "LIKE" operator. And by the way, those tools are VERY inefficient because they use linear searching, instead of pre-indexing. And if you tried to have a set of indexes to handle all permutations of case matching, punctuation, pattern matching, etc., you'd wind up with a giant index, maybe way larger than the source text.
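One classic middle ground between grep's linear scan and a giant all-permutations index is a character n-gram index. Here's a toy trigram version (my sketch, with made-up documents): index every 3-character window, then answer a substring query by intersecting the posting sets of its trigrams.

```python
from collections import defaultdict

def trigram_index(docs):
    # Map every 3-character window of every document to the set of
    # documents containing it. Lookups get fast, but note the cost:
    # the index can rival or exceed the source text in size.
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for i in range(len(text) - 2):
            index[text[i:i + 3]].add(doc_id)
    return index

docs = ["the IS_OF flag", "nothing relevant here"]
idx = trigram_index(docs)

# Candidate documents for the query "IS_OF": intersect the posting
# sets of its trigrams ("IS_", "S_O", "_OF"). Case and punctuation
# survive, because we never normalized them away.
q = "IS_OF"
candidates = set.intersection(
    *(idx.get(q[i:i + 3], set()) for i in range(len(q) - 2)))
print(candidates)  # {0}
```

This is roughly the trick behind trigram-accelerated regex and LIKE search in some real systems, and it illustrates the trade squarely: you buy sublinear exact matching with index bloat.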
But I do think Google has moved beyond at least some of these old limitations; they DO seem to find matches that go beyond simple token indices.
Could you build an efficient, scalable set of indices that stores enough information to accommodate both normal English words and complex near-regex-level literal matching, and still have reasonable performance and reasonable index sizes? In other words, "could you have your cake and eat it too"? Well... you'd think a multi-billion-dollar company full of Stanford smarties certainly could! ;-) But then the cost would need to be justified... and outlier use-cases never survive that scrutiny. As long as the underlying index supports finding celebrity names and lasagna recipes, and pairing them with appropriate ads, the 80% use cases are satisfied.