Sensor evolution isn't stuck; it's been diverted.
Could we have 100mp full frame cameras today? Sure. But other than those of us obsessed with maximizing information in data capture, what would you use such a camera for? Diffraction and other optical limits start to raise issues you can't ignore. 100mp implies a 38" print, and exactly who's making prints that big these days? A 96mp pixel-shift capture (from a 24mp sensor) deBayers the data, reduces aliasing, and increases dynamic range, which would be bigger benefits for most people.
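The print-size claim is simple arithmetic. Here's a quick sketch that works it out for a 3:2-aspect sensor; the 300 and 320 ppi figures are common print-resolution assumptions on my part, not anything tied to a specific printer:

```python
import math

def print_size_inches(megapixels, aspect=(3, 2), ppi=300):
    """Return (long side, short side) in inches for a given pixel count."""
    w_units, h_units = aspect
    pixels = megapixels * 1_000_000
    # Width and height share the aspect ratio: width = sqrt(pixels * w/h)
    width_px = math.sqrt(pixels * w_units / h_units)
    height_px = pixels / width_px
    return width_px / ppi, height_px / ppi

long_side, short_side = print_size_inches(100, ppi=300)
print(f"100mp at 300 ppi: {long_side:.1f} x {short_side:.1f} inches")
# At a slightly tighter 320 ppi, the long side lands right around 38 inches:
long_side_320, _ = print_size_inches(100, ppi=320)
print(f"100mp at 320 ppi, long side: {long_side_320:.1f} inches")
```

In other words, you only "need" 100mp when you're printing roughly a meter wide and viewing from close up.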
Could we have more dynamic range today? Not really. The current sensors are extremely good at recording the randomness of photon arrival without adding any significant electronic noise. Any dynamic range gain would likely only come through getting rid of the Bayer filtration, an area that's constantly being explored, but because of likely costs, we'll probably see it in phones first. Even then, the gain might be only a stop or so.
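To make that concrete: photon arrival is Poisson, so shot noise alone caps SNR at the square root of the photons collected, and a common engineering approximation for dynamic range is full-well capacity over read noise. The numbers below are hypothetical but representative; the point is that with read noise already down near one or two electrons, there's little left for the electronics to give back:

```python
import math

# Illustrative values only; real sensors vary.
full_well_e = 50_000   # electrons a photosite can hold (hypothetical)
read_noise_e = 1.5     # read noise in electrons (hypothetical, but modern
                       # sensors really are in this low-single-digit range)

# Engineering dynamic range in stops: log2(full well / read noise)
dr_stops = math.log2(full_well_e / read_noise_e)
print(f"Dynamic range: {dr_stops:.1f} stops")

# Shot noise limit: SNR = N / sqrt(N) = sqrt(N).
# Even a perfect, noiseless sensor can't do better than this.
for photons in (100, 10_000):
    print(f"{photons} photons -> max SNR {math.sqrt(photons):.0f}:1")
```

Pushing read noise from 1.5 electrons to 1.0 buys you barely half a stop, which is why the remaining headroom is so small.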
So what is actually happening with image sensors?
Simple: they're getting faster at what they do. In particular, that comes in two basic areas: (1) analog-to-digital conversion; and (2) data offloading speed (both internally on the chip and externally from it). Speed has given us better focusing, higher frame rates, and blackout-free viewfinders, to name three major improvements. Most of us would rather have those three things than a jump past 45mp or even another stop of dynamic range.
The next step is likely intelligence on sensor. We've already seen benefits from stacking electronics behind the image sensor, but so far that's mostly a one-way process aimed at increasing data speed. Nikon has a patent (and prototype sensor) that demonstrates what can happen when you apply two-way processing in the stack. That example mostly centers on treating exposure differently in small blocks of photosites (16x16), which is another way of getting to a dynamic range improvement. But I can imagine other two-way intelligence that would impact focus systems, as well. Heck, I remember how we did white balance on the original color QuickCam, and I could see that approach being applied here, too.
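The full details of Nikon's approach aren't public, but the per-block idea can be sketched in a toy form: give each 16x16 block its own exposure so its brightest photosite just fits within full scale, record the per-block gains alongside the raw data, and divide them back out to reconstruct a linear image with highlight detail a single global exposure would have clipped. Everything below (the scene, the gain rule, the array sizes) is my own illustration, not Nikon's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)
BLOCK = 16
FULL_SCALE = 1.0                  # the sensor clips at this level

# Toy "scene": one very bright corner that clips at a global exposure.
scene = rng.uniform(0.0, 1.0, (64, 64))
scene[:32, :32] *= 8.0

def capture_global(scene, exposure):
    """Single exposure for the whole frame; highlights clip."""
    return np.clip(scene * exposure, 0, FULL_SCALE)

def capture_per_block(scene, base_exposure):
    """Per-block exposure: each block's peak is kept just within range."""
    out = np.empty_like(scene)
    gains = np.empty((scene.shape[0] // BLOCK, scene.shape[1] // BLOCK))
    for i in range(0, scene.shape[0], BLOCK):
        for j in range(0, scene.shape[1], BLOCK):
            blk = scene[i:i + BLOCK, j:j + BLOCK]
            gain = min(base_exposure, FULL_SCALE / blk.max())
            out[i:i + BLOCK, j:j + BLOCK] = np.clip(blk * gain, 0, FULL_SCALE)
            gains[i // BLOCK, j // BLOCK] = gain
    return out, gains

g = capture_global(scene, 1.0)
pb, gains = capture_per_block(scene, 1.0)
# Reconstruct linear scene values by dividing each block by its gain.
recon = pb / np.repeat(np.repeat(gains, BLOCK, 0), BLOCK, 1)
print("clipped pixels, global exposure:", int((g >= FULL_SCALE).sum()))
print("max reconstruction error:", float(np.abs(recon - scene).max()))
```

The global capture clips hundreds of pixels in the bright corner; the per-block capture loses essentially nothing. The cost, of course, is the two-way plumbing in the stack to set and read back those per-block exposures fast enough.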
Speed and intelligence in the stack lead to all kinds of camera improvements. Do you really need more pixels? Do you really need to record the randomness of photons more accurately? I think not, and from what I can tell, neither do the camera engineering groups designing tomorrow's cameras. Oh, don't worry, the marketing departments think the numbers game still sells cameras, so we'll see some pixel count gains and small dynamic range boosts, because those produce better "numbers" than yesterday's cameras. But as we've already seen with Sony's 61mp full frame image sensor, the benefits are minimal enough that most can ignore them. Again, for landscape work, I'd rather have that 24mp pixel-shifting camera than a Sony 61mp camera.